st3w4r / openai-partial-stream

Turn a stream of tokens into a parsable JSON object as soon as possible. Enable streaming UI for AI apps based on LLMs.

Home Page: https://partial.stream/

License: MIT License

TypeScript 43.92% Makefile 0.91% HTML 55.01% JavaScript 0.16%
chatgpt chatgpt-api function-calling gpt-3 gpt-4 json-parser json-parsing llm openai openai-api

openai-partial-stream's Introduction

Parse Partial JSON Stream - Turn your slow AI app into an engaging real-time app

  • Convert a stream of tokens into a parsable JSON object before the stream ends.
  • Implement streaming UI in LLM-based AI applications.
  • Leverage OpenAI Function Calling for early stream processing.
  • Parse JSON streams into distinct entities.
  • Engage your users with a real-time experience.

[Demo: json_stream_color]

Follow the Work

Install

Install the package:

npm install --save openai-partial-stream

Usage with a simple stream

Turn a stream of tokens into a parsable JSON object as soon as possible.

import OpenAi from "openai";
import { OpenAiHandler, StreamMode } from "openai-partial-stream";

// Set your OpenAI API key as an environment variable: OPENAI_API_KEY
const openai = new OpenAi({ apiKey: process.env.OPENAI_API_KEY });

const stream = await openai.chat.completions.create({
  messages: [{ role: "system", content: "Say hello to the world." }],
  model: "gpt-3.5-turbo", // OR "gpt-4"
  stream: true, // ENABLE STREAMING
  temperature: 1,
  functions: [
    {
      name: "say_hello",
      description: "say hello",
      parameters: {
        type: "object",
        properties: {
          sentence: {
            type: "string",
            description: "The sentence generated",
          },
        },
      },
    },
  ],
  function_call: { name: "say_hello" },
});

const openAiHandler = new OpenAiHandler(StreamMode.StreamObjectKeyValueTokens);
const entityStream = openAiHandler.process(stream);

for await (const item of entityStream) {
  console.log(item);
}

Output:

{ index: 0, status: 'PARTIAL', data: {} }
{ index: 0, status: 'PARTIAL', data: { sentence: '' } }
{ index: 0, status: 'PARTIAL', data: { sentence: 'Hello' } }
{ index: 0, status: 'PARTIAL', data: { sentence: 'Hello,' } }
{ index: 0, status: 'PARTIAL', data: { sentence: 'Hello, world' } }
{ index: 0, status: 'PARTIAL', data: { sentence: 'Hello, world!' } }
{ index: 0, status: 'COMPLETED', data: { sentence: 'Hello, world!' } }
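
Each event carries an index, a status, and the data parsed so far, so a UI can render partial results immediately and swap in the final value once the status changes. A minimal sketch, assuming the status values are the string literals shown in the output above; renderPartial and renderFinal are hypothetical UI helpers:

for await (const item of entityStream) {
  if (item.status === "PARTIAL") {
    renderPartial(item.index, item.data); // hypothetical: update the in-progress view
  } else if (item.status === "COMPLETED") {
    renderFinal(item.index, item.data); // hypothetical: mark the entry as done
  }
}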

Usage with stream and entity parsing

Validate the data against a schema and only return the data when it is valid.

import { z } from "zod";
import OpenAi from "openai";
import { OpenAiHandler, StreamMode, Entity } from "openai-partial-stream";

// Set your OpenAI API key as an environment variable: OPENAI_API_KEY
const openai = new OpenAi({ apiKey: process.env.OPENAI_API_KEY });

const stream = await openai.chat.completions.create({
  messages: [{ role: "system", content: "Say hello to the world." }],
  model: "gpt-3.5-turbo", // OR "gpt-4"
  stream: true, // ENABLE STREAMING
  temperature: 1,
  functions: [
    {
      name: "say_hello",
      description: "say hello",
      parameters: {
        type: "object",
        properties: {
          sentence: {
            type: "string",
            description: "The sentence generated",
          },
        },
      },
    },
  ],
  function_call: { name: "say_hello" },
});

const openAiHandler = new OpenAiHandler(StreamMode.StreamObjectKeyValueTokens);
const entityStream = openAiHandler.process(stream);

// Entity Parsing to validate the data
const HelloSchema = z.object({
  sentence: z.string().optional(),
});

const entityHello = new Entity("sentence", HelloSchema);
const helloEntityStream = entityHello.genParse(entityStream);

for await (const item of helloEntityStream) {
  console.log(item);
}

Output:

{ index: 0, status: 'PARTIAL', data: {}, entity: 'sentence' }
{ index: 0, status: 'PARTIAL', data: { sentence: '' }, entity: 'sentence' }
{ index: 0, status: 'PARTIAL', data: { sentence: 'Hi' }, entity: 'sentence' }
{ index: 0, status: 'PARTIAL', data: { sentence: 'Hi,' }, entity: 'sentence' }
{ index: 0, status: 'PARTIAL', data: { sentence: 'Hi, world' }, entity: 'sentence' }
{ index: 0, status: 'PARTIAL', data: { sentence: 'Hi, world!' }, entity: 'sentence' }
{ index: 0, status: 'COMPLETED', data: { sentence: 'Hi, world!' }, entity: 'sentence'}
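
Since HelloSchema is a regular Zod schema, the same definition can also type the downstream consumer. This is plain Zod usage (reusing z and HelloSchema from the example above), nothing specific to this library:

// Derive a TypeScript type from the same schema used for validation.
type Hello = z.infer<typeof HelloSchema>; // { sentence?: string }

function onHello(hello: Hello) {
  console.log(hello.sentence ?? "(still streaming)");
}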

Usage with stream and entity parsing with multiple entities

import { z } from "zod";
import OpenAi from "openai";
import { OpenAiHandler, StreamMode, Entity } from "openai-partial-stream";

// Instantiate the OpenAI client with your API key
const openai = new OpenAi({
  apiKey: process.env.OPENAI_API_KEY,
});

const PostcodeSchema = z.object({
  name: z.string().optional(),
  postcode: z.string().optional(),
  population: z.number().optional(),
});

// Call the API with stream enabled and a function
const stream = await openai.chat.completions.create({
  messages: [
    {
      role: "system",
      content: "Give me 3 cities and their postcodes in California.",
    },
  ],
  model: "gpt-3.5-turbo", // OR "gpt-4"
  stream: true, // ENABLE STREAMING
  temperature: 1.1,
  functions: [
    {
      name: "set_postcode",
      description: "Set a postcode and a city",
      parameters: {
        type: "object",
        properties: {
          // The name of the entity
          postcodes: {
            type: "array",
            items: {
              type: "object",
              properties: {
                name: {
                  type: "string",
                  description: "Name of the city",
                },
                postcode: {
                  type: "string",
                  description: "The postcode of the city",
                },
                population: {
                  type: "number",
                  description: "The population of the city",
                },
              },
            },
          },
        },
      },
    },
  ],
  function_call: { name: "set_postcode" },
});

// Select the mode of the stream parser
// - StreamObjectKeyValueTokens: (REALTIME)     Stream of JSON objects, key value pairs and tokens
// - StreamObjectKeyValue:       (PROGRESSIVE)  Stream of JSON objects and key value pairs
// - StreamObject:               (ONE-BY-ONE)   Stream of JSON objects
// - NoStream:                   (ALL-TOGETHER) All the data is returned at the end of the process
const mode = StreamMode.StreamObject;

// Create an instance of the handler
const openAiHandler = new OpenAiHandler(mode);
// Process the stream
const entityStream = openAiHandler.process(stream);
// Create an entity with the schema to validate the data
const entityPostcode = new Entity("postcodes", PostcodeSchema);
// Parse the stream to an entity, using the schema to validate the data
const postcodeEntityStream = entityPostcode.genParseArray(entityStream);

// Iterate over the stream of entities
for await (const item of postcodeEntityStream) {
  if (item) {
    // Display the entity
    console.log(item);
  }
}

Output:

{ index: 0, status: 'COMPLETED', data: { name: 'Los Angeles', postcode: '90001', population: 3971883 }, entity: 'postcodes' }
{ index: 1, status: 'COMPLETED', data: { name: 'San Francisco', postcode: '94102', population: 883305 }, entity: 'postcodes' }
{ index: 2, status: 'COMPLETED', data: { name: 'San Diego', postcode: '92101', population: 1425976 }, entity: 'postcodes'}

Modes

Select a mode from the list below that best suits your requirements:

  1. NoStream
  2. StreamObject
  3. StreamObjectKeyValue
  4. StreamObjectKeyValueTokens

NoStream

Results are returned only after the entire query completes.

NoStream Details
✅ Single query retrieves all data
✅ Reduces network traffic
⚠️ User experience may be compromised due to extended wait times

StreamObject

An event is generated for each item in the list. Items appear as they become ready.

StreamObject Details
✅ Each message corresponds to a fully-formed item
✅ Fewer messages
✅ All essential fields are received at once
⚠️ Some delay: users need to wait until an item is fully ready to update the UI

StreamObjectKeyValue

Objects are received in fragments: both a key and its corresponding value are sent together.

StreamObjectKeyValue Details
✅ Users can engage with portions of the UI
✅ Supports more regular UI updates
⚠️ Higher network traffic
⚠️ Challenges in enforcing keys due to incomplete objects

StreamObjectKeyValueTokens

Keys are received in full, while values are delivered piecemeal until they're complete. This method offers token-by-token UI updating.

StreamObjectKeyValueTokens Details
✅ Offers a dynamic user experience
✅ Enables step-by-step content consumption
✅ Decreases user waiting times
⚠️ Possible UI inconsistencies due to values arriving incrementally
⚠️ Increased network traffic

Demo

Stream of JSON objects progressively, by key-value pairs:

Color_Streaming_Mode_3_colors.mov

Stream of JSON objects in realtime:

json_stream_sf.mp4

References

npm package

openai-partial-stream's People

Contributors

dependabot[bot], ngasull, st3w4r


Forkers

jedimonkey

openai-partial-stream's Issues

NextJS 13 TS/Zod dependency issue

First off thank you for contributing towards building this. I think this is super useful in building streaming applications with function calling.

I am trying to use this in React with NextJS 13. I am also using the Vercel AI SDK's useCompletion hook on the client side, which then calls an API route in the api directory to stream the responses. I get this error with this setup:

./node_modules/typescript/lib/typescript.js
Critical dependency: the request of a dependency is an expression

Import trace for requested module:
./node_modules/typescript/lib/typescript.js
./node_modules/zod-to-ts/dist/index.js
./node_modules/openai-partial-stream/dist/chunk-66SDEIDM.mjs
./node_modules/openai-partial-stream/dist/index.mjs
./src/app/api/completion/route.ts
./node_modules/next/dist/build/webpack/loaders/next-app-loader.js?name=app%2Fapi%2Fcompletion%2Froute&page=%2Fapi%2Fcompletion%2Froute&appPaths=&pagePath=private-next-app-dir%2Fapi%2Fcompletion%2Froute.ts&appDir=%2FUsers%2Fakashdeepdeb%2FDesktop%2Ftopshelf%2Fsrc%2Fapp&pageExtensions=tsx&pageExtensions=ts&pageExtensions=jsx&pageExtensions=js&rootDir=%2FUsers%2Fakashdeepdeb%2FDesktop%2Ftopshelf&isDev=true&tsconfigPath=tsconfig.json&basePath=&assetPrefix=&nextConfigOutput=&preferredRegion=&middlewareConfig=e30%3D!./src/app/api/completion/route.ts?__next_edge_ssr_entry__
<w> [webpack.cache.Pack

Any idea whether this is related to a zod-to-ts version issue?

Does JsonCloser work for nested objects?

I have an entity/schema that has nested objects. Even with StreamObjectKeyValueTokens, my streaming behaves more like StreamObjectKeyValue, where a field only becomes available once its value is complete. I suspect it's because JsonCloser doesn't work with an entity/schema that has nested objects.

NextJS 13

Hello, thank you for the package. I am trying to implement the color example in a Next.js 13 app, but I don't understand how to.

Here is my api/colors route:

import { z } from "zod";
import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

import { OpenAiHandler, StreamMode, Entity } from "openai-partial-stream";

// Set the runtime to edge for best performance
export const runtime = 'edge';

const config = new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(config);

async function callGenerateColors(
    mode = StreamMode.StreamObjectKeyValueTokens,
) {
    // Call OpenAI API, with function calling
    // Function calling: https://openai.com/blog/function-calling-and-other-api-updates
    const stream = await openai.createChatCompletion({
        messages: [
            {
                role: "user",
                content:
                    "Give me a palette of 2 gorgeous color with the hex code, name and a description.",
            },
        ],
        model: "gpt-3.5-turbo", // OR "gpt-4"
        stream: true, // ENABLE STREAMING
        temperature: 1.3,
        functions: [
            {
                name: "give_colors",
                description: "Give a list of color",
                parameters: {
                    type: "object",
                    properties: {
                        colors: {
                            type: "array",
                            items: {
                                type: "object",
                                properties: {
                                    hex: {
                                        type: "string",
                                        description:
                                            "The hexadecimal code of the color",
                                    },
                                    name: {
                                        type: "string",
                                        description: "The color name",
                                    },
                                    description: {
                                        type: "string",
                                        description:
                                            "The description of the color",
                                    },
                                },
                            },
                        },
                    },
                },
            },
        ],
        function_call: { name: "give_colors" },
    });

    // Handle the stream from OpenAI client
    const openAiHandler = new OpenAiHandler(mode);
    // Parse the stream to valid JSON
    const entityStream = openAiHandler.process(stream);

    return entityStream;
}


export async function POST(req: Request) {
    // Select the mode of the stream parser
    const mode = StreamMode.StreamObject; // ONE-BY-ONE
    // const colorEntityStream = await callGenerateColors(mode);

    return callGenerateColors(mode);
}

And my page:


import React, { useState, useEffect } from 'react';

import { useCompletion } from 'ai/react';


interface Color {
  hex: string;
  name: string;
  description: string;
}

const IndexPage = () => {
  const [colors, setColors] = useState<Color[]>([]);
  const [loading, setLoading] = useState<boolean>(true);

  const {
    completion,
    input,
    stop,
    isLoading,
    handleInputChange,
    handleSubmit,
  } = useCompletion({
    api: '/api/colors',
  });

  if (isLoading) return <p>Loading...</p>;

  return (
    <div>
      <h1>Generated Colors</h1>

      {completion}
      <ul>
        {colors.map((color, index) => (
          <li key={index}>
            <strong>{color.name}</strong> ({color.hex}): {color.description}
          </li>
        ))}
      </ul>
    </div>
  );
};

export default IndexPage;

Do you have any idea how I could integrate your package into Next.js 13? Thank you!
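
(Not an official answer, just a sketch of one common pattern: the route can wrap the entity stream in a ReadableStream of newline-delimited JSON and return it as the Response body. This assumes the client reads NDJSON itself rather than relying on useCompletion, and it reuses callGenerateColors and StreamMode from the route code above.)

export async function POST(req: Request) {
  const entityStream = await callGenerateColors(StreamMode.StreamObject);
  const encoder = new TextEncoder();

  // Sketch: emit each parsed entity as one JSON line.
  const body = new ReadableStream({
    async start(controller) {
      for await (const item of entityStream) {
        if (item) {
          controller.enqueue(encoder.encode(JSON.stringify(item) + "\n"));
        }
      }
      controller.close();
    },
  });

  return new Response(body, {
    headers: { "Content-Type": "application/x-ndjson" },
  });
}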

Example with postcodes not working

Hey! Unfortunately I'm having a hard time getting this to run correctly.

I'm using the copy-pasted postcodes example from the README.

Unfortunately, it only detects the last entity:

[screenshots]

Expected output based on the README:

{ index: 0, status: 'COMPLETED', data: { name: 'Los Angeles', postcode: '90001', population: 3971883 }, entity: 'postcodes' }
{ index: 1, status: 'COMPLETED', data: { name: 'San Francisco', postcode: '94102', population: 883305 }, entity: 'postcodes' }
{ index: 2, status: 'COMPLETED', data: { name: 'San Diego', postcode: '92101', population: 1425976 }, entity: 'postcodes'}

If I iterate over the entityStream instead, it looks like this:

[screenshots]

Is this a known issue, or did I miss anything? Thanks for your help!

Generate TypeScript declaration maps

I'd really like to add support for declaration maps so that in VS Code I can use Go to Definition to navigate to the .ts files instead of the .mts or .mjs files.

What I've tried to do:

  1. Add "declaration": true and "declarationMap": true to packages/openai-partial-stream/tsconfig.json.
  2. Run npm run build.
  3. Still, no .map files were generated.

Since npm run build uses tsup, I suspect it's related to egoist/tsup#488, and apparently this remains unsupported. I'd love to hear your thoughts. I'm happy to help.
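
For reference, the additions described in step 1 would look roughly like this in packages/openai-partial-stream/tsconfig.json (a sketch; the existing options are omitted):

{
  "compilerOptions": {
    "declaration": true,
    "declarationMap": true
  }
}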

NodeJS + Express

Thank you so much for the work on this! It's an awesome contribution. When running the Color example in an endpoint, I get a status of "COMPLETED" even though the output hasn't finished streaming yet.

[Screenshot 2023-11-12 at 10:06:59]
