openai / openai-node

The official Node.js / TypeScript library for the OpenAI API

Home Page: https://www.npmjs.com/package/openai

License: Apache License 2.0

Languages: TypeScript 97.12%, JavaScript 1.66%, Shell 1.15%, HTML 0.02%, Dockerfile 0.04%, Ruby 0.01%
Topics: nodejs, openai, typescript

openai-node's Introduction

OpenAI Node API Library


This library provides convenient access to the OpenAI REST API from TypeScript or JavaScript.

It is generated from our OpenAPI specification with Stainless.

To learn how to use the OpenAI API, check out our API Reference and Documentation.

Installation

npm install openai

You can import in Deno via:

import OpenAI from 'https://deno.land/x/openai@<version>/mod.ts';

Usage

The full API of this library can be found in the api.md file, along with many code examples. The code below shows how to get started using the chat completions API.

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  });
}

main();

Streaming responses

We provide support for streaming responses using Server-Sent Events (SSE).

import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const stream = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

main();

If you need to cancel a stream, you can break from the loop or call stream.controller.abort().
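For example, a minimal sketch of stopping after the first chunk (reusing the stream from the example above):

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
  // Abort the underlying HTTP request; breaking out of the loop alone also works.
  stream.controller.abort();
  break;
}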

Request & Response types

This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const params: OpenAI.Chat.ChatCompletionCreateParams = {
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  };
  const chatCompletion: OpenAI.Chat.ChatCompletion = await openai.chat.completions.create(params);
}

main();

Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.

Important

Previous versions of this SDK used a Configuration class. See the v3 to v4 migration guide.
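As a quick illustration (a minimal sketch; the migration guide covers the full details), v3 code like this:

const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(configuration);

becomes this in v4:

import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });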

Polling Helpers

When interacting with the API, some actions, such as starting a Run or adding files to vector stores, are asynchronous and take time to complete. The SDK includes helper functions that poll the status until it reaches a terminal state and then return the resulting object. If an API method results in an action that could benefit from polling, there will be a corresponding version of the method ending in 'AndPoll'.

For instance, to create a Run and poll until it reaches a terminal state, you can run:

const run = await openai.beta.threads.runs.createAndPoll(thread.id, {
  assistant_id: assistantId,
});

More information on the lifecycle of a Run can be found in the Run Lifecycle Documentation

Bulk Upload Helpers

When creating and interacting with vector stores, you can use the polling helpers to monitor the status of operations. For convenience, we also provide a bulk upload helper to allow you to simultaneously upload several files at once.

import { createReadStream } from 'fs';

const fileList = [
  createReadStream('/home/data/example.pdf'),
  ...
];

const batch = await openai.vectorStores.fileBatches.uploadAndPoll(vectorStore.id, fileList);

Streaming Helpers

The SDK also includes helpers to process streams and handle the incoming events.

const run = openai.beta.threads.runs
  .stream(thread.id, {
    assistant_id: assistant.id,
  })
  .on('textCreated', (text) => process.stdout.write('\nassistant > '))
  .on('textDelta', (textDelta, snapshot) => process.stdout.write(textDelta.value))
  .on('toolCallCreated', (toolCall) => process.stdout.write(`\nassistant > ${toolCall.type}\n\n`))
  .on('toolCallDelta', (toolCallDelta, snapshot) => {
    if (toolCallDelta.type === 'code_interpreter') {
      if (toolCallDelta.code_interpreter.input) {
        process.stdout.write(toolCallDelta.code_interpreter.input);
      }
      if (toolCallDelta.code_interpreter.outputs) {
        process.stdout.write('\noutput >\n');
        toolCallDelta.code_interpreter.outputs.forEach((output) => {
          if (output.type === 'logs') {
            process.stdout.write(`\n${output.logs}\n`);
          }
        });
      }
    }
  });

More information on streaming helpers can be found in the dedicated documentation: helpers.md

Streaming responses

This library provides several conveniences for streaming chat completions, for example:

import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const stream = await openai.beta.chat.completions.stream({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });

  stream.on('content', (delta, snapshot) => {
    process.stdout.write(delta);
  });

  // or, equivalently:
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }

  const chatCompletion = await stream.finalChatCompletion();
  console.log(chatCompletion); // {id: "…", choices: […], …}
}

main();

Streaming with openai.beta.chat.completions.stream({…}) exposes various helpers for your convenience, including event handlers and promises.

Alternatively, you can use openai.chat.completions.create({ stream: true, … }) which only returns an async iterable of the chunks in the stream and thus uses less memory (it does not build up a final chat completion object for you).

If you need to cancel a stream, you can break from a for await loop or call stream.abort().

Automated function calls

We provide the openai.beta.chat.completions.runTools({…}) convenience helper for using function tool calls with the /chat/completions endpoint, which automatically calls the JavaScript functions you provide and sends their results back to the /chat/completions endpoint, looping as long as the model requests tool calls.

If you pass a parse function, it will automatically parse the arguments for you and return any parsing errors to the model to attempt auto-recovery. Otherwise, the args will be passed to the function you provide as a string.

If you pass tool_choice: {function: {name: …}} instead of auto, it returns immediately after calling that function (and only loops to auto-recover parsing errors).

import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const runner = client.beta.chat.completions
    .runTools({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: 'How is the weather this week?' }],
      tools: [
        {
          type: 'function',
          function: {
            function: getCurrentLocation,
            parameters: { type: 'object', properties: {} },
          },
        },
        {
          type: 'function',
          function: {
            function: getWeather,
            parse: JSON.parse, // or use a validation library like zod for typesafe parsing.
            parameters: {
              type: 'object',
              properties: {
                location: { type: 'string' },
              },
            },
          },
        },
      ],
    })
    .on('message', (message) => console.log(message));

  const finalContent = await runner.finalContent();
  console.log();
  console.log('Final content:', finalContent);
}

async function getCurrentLocation() {
  return 'Boston'; // Simulate lookup
}

async function getWeather(args: { location: string }) {
  const { location } = args;
  // … do lookup …
  return { temperature, precipitation };
}

main();

// {role: "user",      content: "How's the weather this week?"}
// {role: "assistant", tool_calls: [{type: "function", function: {name: "getCurrentLocation", arguments: "{}"}, id: "123"}
// {role: "tool",      name: "getCurrentLocation", content: "Boston", tool_call_id: "123"}
// {role: "assistant", tool_calls: [{type: "function", function: {name: "getWeather", arguments: '{"location": "Boston"}'}, id: "1234"}]}
// {role: "tool",      name: "getWeather", content: '{"temperature": "50degF", "preciptation": "high"}', tool_call_id: "1234"}
// {role: "assistant", content: "It's looking cold and rainy - you might want to wear a jacket!"}
//
// Final content: "It's looking cold and rainy - you might want to wear a jacket!"

Like with .stream(), we provide a variety of helpers and events.

Note that runFunctions was previously available as well, but has been deprecated in favor of runTools.

Read more about various examples, such as integrating with zod, Next.js, and proxying a stream to the browser.

File uploads

Request parameters that correspond to file uploads can be passed in many different forms:

  • File (or an object with the same structure)
  • a fetch Response (or an object with the same structure)
  • an fs.ReadStream
  • the return value of our toFile helper
import fs from 'fs';
import fetch from 'node-fetch';
import OpenAI, { toFile } from 'openai';

const openai = new OpenAI();

// If you have access to Node `fs` we recommend using `fs.createReadStream()`:
await openai.files.create({ file: fs.createReadStream('input.jsonl'), purpose: 'fine-tune' });

// Or if you have the web `File` API you can pass a `File` instance:
await openai.files.create({ file: new File(['my bytes'], 'input.jsonl'), purpose: 'fine-tune' });

// You can also pass a `fetch` `Response`:
await openai.files.create({ file: await fetch('https://somesite/input.jsonl'), purpose: 'fine-tune' });

// Finally, if none of the above are convenient, you can use our `toFile` helper:
await openai.files.create({
  file: await toFile(Buffer.from('my bytes'), 'input.jsonl'),
  purpose: 'fine-tune',
});
await openai.files.create({
  file: await toFile(new Uint8Array([0, 1, 2]), 'input.jsonl'),
  purpose: 'fine-tune',
});

Handling errors

When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of APIError will be thrown:

async function main() {
  const job = await openai.fineTuning.jobs
    .create({ model: 'gpt-3.5-turbo', training_file: 'file-abc123' })
    .catch(async (err) => {
      if (err instanceof OpenAI.APIError) {
        console.log(err.status); // 400
        console.log(err.name); // BadRequestError
        console.log(err.headers); // {server: 'nginx', ...}
      } else {
        throw err;
      }
    });
}

main();

Error codes are as follows:

Status Code  Error Type
400          BadRequestError
401          AuthenticationError
403          PermissionDeniedError
404          NotFoundError
422          UnprocessableEntityError
429          RateLimitError
>=500        InternalServerError
N/A          APIConnectionError
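Each of these subclasses can also be caught directly; a minimal sketch, assuming the subclasses are exposed on the OpenAI class like OpenAI.APIError above:

try {
  await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  });
} catch (err) {
  if (err instanceof OpenAI.RateLimitError) {
    console.log('Hit the rate limit; back off before retrying.');
  } else {
    throw err;
  }
}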

Microsoft Azure OpenAI

To use this library with Azure OpenAI, use the AzureOpenAI class instead of the OpenAI class.

Important

The Azure API shape differs from the core API shape which means that the static types for responses / params won't always be correct.

const openai = new AzureOpenAI();

const result = await openai.chat.completions.create({
  model: 'gpt-4-1106-preview',
  messages: [{ role: 'user', content: 'Say hello!' }],
});

console.log(result.choices[0]!.message?.content);

Retries

Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.

You can use the maxRetries option to configure or disable this:

// Configure the default for all requests:
const openai = new OpenAI({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await openai.chat.completions.create({ messages: [{ role: 'user', content: 'How can I get the name of the current day in Node.js?' }], model: 'gpt-3.5-turbo' }, {
  maxRetries: 5,
});

Timeouts

Requests time out after 10 minutes by default. You can configure this with a timeout option:

// Configure the default for all requests:
const openai = new OpenAI({
  timeout: 20 * 1000, // 20 seconds (default is 10 minutes)
});

// Override per-request:
await openai.chat.completions.create({ messages: [{ role: 'user', content: 'How can I list all files in a directory using Python?' }], model: 'gpt-3.5-turbo' }, {
  timeout: 5 * 1000,
});

On timeout, an APIConnectionTimeoutError is thrown.

Note that requests which time out will be retried twice by default.
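For example, a minimal sketch of catching the timeout error, assuming APIConnectionTimeoutError is exposed on the OpenAI class like APIError above:

try {
  await openai.chat.completions.create(
    { messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-3.5-turbo' },
    { timeout: 1, maxRetries: 0 }, // deliberately tiny timeout; retries disabled so the error surfaces
  );
} catch (err) {
  if (err instanceof OpenAI.APIConnectionTimeoutError) {
    console.log('The request timed out.');
  } else {
    throw err;
  }
}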

Auto-pagination

List methods in the OpenAI API are paginated. You can use for await … of syntax to iterate through items across all pages:

async function fetchAllFineTuningJobs(params) {
  const allFineTuningJobs = [];
  // Automatically fetches more pages as needed.
  for await (const fineTuningJob of openai.fineTuning.jobs.list({ limit: 20 })) {
    allFineTuningJobs.push(fineTuningJob);
  }
  return allFineTuningJobs;
}

Alternatively, you can request a single page at a time:

let page = await openai.fineTuning.jobs.list({ limit: 20 });
for (const fineTuningJob of page.data) {
  console.log(fineTuningJob);
}

// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
  page = await page.getNextPage();
  // ...
}

Advanced Usage

Accessing raw Response data (e.g., headers)

The "raw" Response returned by fetch() can be accessed through the .asResponse() method on the APIPromise type that all methods return.

You can also use the .withResponse() method to get the raw Response along with the parsed data.

const openai = new OpenAI();

const response = await openai.chat.completions
  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-3.5-turbo' })
  .asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object

const { data: chatCompletion, response: raw } = await openai.chat.completions
  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-3.5-turbo' })
  .withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(chatCompletion);

Making custom/undocumented requests

This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used.

Undocumented endpoints

To make requests to undocumented endpoints, you can use client.get, client.post, and other HTTP verbs. Options on the client, such as retries, will be respected when making these requests.

await client.post('/some/path', {
  body: { some_prop: 'foo' },
  query: { some_query_arg: 'bar' },
});

Undocumented request params

To make requests using undocumented parameters, you may use // @ts-expect-error on the undocumented parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you send will be sent as-is.

client.foo.create({
  foo: 'my_param',
  bar: 12,
  // @ts-expect-error baz is not yet public
  baz: 'undocumented option',
});

For requests with the GET verb, any extra params will be sent in the query string; all other requests will send the extra params in the body.

If you want to explicitly send an extra argument, you can do so with the query, body, and headers request options.
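For example, a minimal sketch reusing the hypothetical client.foo.create from above (the extra names are likewise hypothetical, purely for illustration):

client.foo.create(
  { foo: 'my_param' },
  {
    query: { some_extra_query_arg: 'bar' }, // hypothetical extra query param
    headers: { 'X-Some-Extra-Header': 'true' }, // hypothetical extra header
  },
);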

Undocumented response properties

To access undocumented response properties, you may use // @ts-expect-error on the property access, or cast the response object to the requisite type. As with the request params, we do not validate or strip extra properties from the response from the API.
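A minimal sketch, reusing the hypothetical baz property from the request example above:

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-3.5-turbo',
});

// @ts-expect-error baz is not yet public
console.log(chatCompletion.baz);

// or, cast to a wider type:
console.log((chatCompletion as { baz?: string }).baz);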

Customizing the fetch client

By default, this library uses node-fetch in Node, and expects a global fetch function in other environments.

If you would prefer to use a global, web-standards-compliant fetch function even in a Node environment (for example, if you are running Node with --experimental-fetch or using Next.js, which polyfills with undici), add the following import before your first import from "openai":

// Tell TypeScript and the package to use the global web fetch instead of node-fetch.
// Note, despite the name, this does not add any polyfills, but expects them to be provided if needed.
import 'openai/shims/web';
import OpenAI from 'openai';

To do the inverse, add import "openai/shims/node" (which does import polyfills). This can also be useful if you are getting the wrong TypeScript types for Response (more details).
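That is, a minimal sketch:

import 'openai/shims/node';
import OpenAI from 'openai';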

Logging and middleware

You may also provide a custom fetch function when instantiating the client, which can be used to inspect or alter the Request or Response before/after each request:

import { fetch } from 'undici'; // as one example
import OpenAI from 'openai';

const client = new OpenAI({
  fetch: async (url: RequestInfo, init?: RequestInit): Promise<Response> => {
    console.log('About to make a request', url, init);
    const response = await fetch(url, init);
    console.log('Got response', response);
    return response;
  },
});

Note that if given a DEBUG=true environment variable, this library will log all requests and responses automatically. This is intended for debugging purposes only and may change in the future without notice.

Configuring an HTTP(S) Agent (e.g., for proxies)

By default, this library uses a stable agent for all http/https requests to reuse TCP connections, eliminating many TCP & TLS handshakes and shaving around 100ms off most requests.

If you would like to disable or customize this behavior, for example to use the API behind a proxy, you can pass an httpAgent which is used for all requests (be they http or https), for example:

import http from 'http';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Configure the default for all requests:
const openai = new OpenAI({
  httpAgent: new HttpsProxyAgent(process.env.PROXY_URL),
});

// Override per-request:
await openai.models.list({
  httpAgent: new http.Agent({ keepAlive: false }),
});

Semantic versioning

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:

  1. Changes that only affect static types, without breaking runtime behavior.
  2. Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals).
  3. Changes that we do not expect to impact the vast majority of users in practice.

We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.

We are keen for your feedback; please open an issue with questions, bugs, or suggestions.

Requirements

TypeScript >= 4.5 is supported.

The following runtimes are supported:

  • Node.js 18 LTS or later (non-EOL) versions.
  • Deno v1.28.0 or higher, using import OpenAI from "npm:openai".
  • Bun 1.0 or later.
  • Cloudflare Workers.
  • Vercel Edge Runtime.
  • Jest 28 or greater with the "node" environment ("jsdom" is not supported at this time).
  • Nitro v2.6 or greater.

Note that React Native is not supported at this time.

If you are interested in other runtime environments, please open or upvote an issue on GitHub.

openai-node's People

Contributors

arnif, athyuttamre, buster95, ceifa, jeevnayak, just-moh-it, logankilpatrick, nknj, rattrayalex, schnerd, simonpfish, stainless-app[bot], stainless-bot


openai-node's Issues

How to use stream: true?

I'm a bit lost as to how to actually use stream: true in this library.

Example (incorrect syntax):

const res = await openai.createCompletion({
  model: "text-davinci-002",
  prompt: "Say this is a test",
  max_tokens: 6,
  temperature: 0,
  stream: true,
});

res.onmessage = (event) => {
  console.log(event.data);
}

CreateCompletion fails with prompts > 478 characters

Describe the bug

openai.createCompletion({}) throws an error with message "Request failed with status code 400" with the following call:
const response = await openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens, temperature });
Where
p = "Devin: Hello, how can I help you? you: What can you do for me Devin: I can help you with any questions you may have about our products or services. I can also provide you with information about our company and answer any other questions you may have. you: Okay tell me about your company Devin: Sure! Our company is a leading provider of innovative technology solutions. We specialize in developing custom software and hardware solutions for businesses of all sizes. We have alto"
max_tokens = 4000
temperature = 0.0

My configuration is correct, as all calls with prompts under 478 characters work, but once I pass this character limit, it fails every time.

To Reproduce

  1. call
    const response = await openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens, temperature });
    with p = any string longer than 478 characters. Use example string above.

Code snippets

Error response given back to me:
{"message":"Request failed with status code 400","name":"Error","stack":"Error: Request failed with status code 400\n    at createError (node_modules/axios/lib/core/createError.js:16:15)\n    at settle (node_modules/axios/lib/core/settle.js:17:12)\n    at IncomingMessage.handleStreamEnd (node_modules/axios/lib/adapters/http.js:322:11)\n    at IncomingMessage.emit (node:events:539:35)\n    at endReadableNT (node:internal/streams/readable:1345:12)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json","User-Agent":"OpenAI/NodeJS/3.1.0","Authorization":"Bearer sk-***","Content-Length":553},"method":"post","data":"{\"model\":\"text-davinci-003\",\"prompt\":\"Devin: Hello, how can I help you? you: What can you do for me Devin: I can help you with any questions you may have about our products or services. I can also provide you with information about our company and answer any other questions you may have. you: Okay tell me about your company Devin: Sure! Our company is a leading provider of innovative technology solutions. We specialize in developing custom software and hardware solutions for businesses of all sizes. We have alto\",\"max_tokens\":4000,\"temperature\":0}","url":"https://api.openai.com/v1/completions"},"status":400}

The above was printed using JSON.stringify, FYI.

OS

macos

Node version

node 16

Library version

3.1.0

Provide a user identifier

OpenAI safety best practices:

To help with monitoring for possible misuse, developers serving multiple end-users should pass an additional user parameter to OpenAI with each API call, in which user is a unique ID representing a particular end-user.

With the Python Client you can pass the additional "user" argument:

response = openai.Completion.create(
  engine="davinci",
  prompt="This is a test",
  max_tokens=5,
  user="1"
)

Is this also a feature in this node client?
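For reference, in the current v4 SDK shown in the README above, a user field can be passed alongside the other chat completion params; a minimal sketch:

await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Say this is a test' }],
  user: 'user-1234', // a unique ID representing the end-user
});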

Internal use of axios causes error inside Cloudflare Workers

Describe the bug

I am trying to use the client inside a Cloudflare Worker and I get an error as follows:

TypeError: adapter is not a function
    at dispatchRequest (index.js:35781:14)
    at Axios.request (index.js:36049:19)
    at Function.wrap [as request] (index.js:34878:20)

This seems to be a common problem, as the way Axios checks for XHR breaks in Cloudflare Workers, which provide a reduced Node environment:

https://community.cloudflare.com/t/typeerror-e-adapter-s-adapter-is-not-a-function/166469/2

Recommendation is to use fetch instead.

To Reproduce

Try to use the API in a Cloudflare Worker.

Code snippets

No response

OS

Windows 10

Node version

Node v16

Library version

openai v3.1.0

createImageVariation gives unclear error when input image is not square

Describe the bug

Using the function createImageVariation with a non-square image results in the following error:

(node:58341) UnhandledPromiseRejectionWarning: Error: Request failed with status code 400
    at createError (/Users/kenny.lindahl/Dev/test/open-ai-gpt/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/Users/kenny.lindahl/Dev/test/open-ai-gpt/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/Users/kenny.lindahl/Dev/test/open-ai-gpt/node_modules/axios/lib/adapters/http.js:322:11)
    at IncomingMessage.emit (events.js:387:35)
    at endReadableNT (internal/streams/readable.js:1317:12)
    at processTicksAndRejections (internal/process/task_queues.js:82:21)

Solution:

Option 1:
The client should detect that the image is not square and throw an error without calling the API.

Option 2:
Alternatively, it should resize the image (adding margins rather than stretching the content) so it can be sent to the API successfully.

To Reproduce

Complete node program that reproduces the issue:

const { Configuration, OpenAIApi } = require("openai");
const fs = require("fs");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

(async () => {
  const response = await openai.createImageVariation(
    fs.createReadStream(__dirname + "/images/non-square-image.png"),
    2,
    "1024x1024"
  );

  console.log("-------------------");
  console.log(response);
})();

Code snippets

No response

OS

macOS Monterey: 12.5.1 (21G83)

Node version

v14.17.3

Library version

3.1.0

all models except davinci 2+ not working

Describe the bug

All models except davinci 2+ are not working: I get Axios errors when trying to use models such as ada / babbage / etc. Only davinci 2+ works; everything else throws an error.

To Reproduce

Fetch using redux.

Error snip:

response: {
  status: 404,
  statusText: 'Not Found',
  headers: {
    date: 'Fri, 02 Dec 2022 00:49:31 GMT',
    'content-type': 'application/json; charset=utf-8',
    'content-length': '158',
    connection: 'close',
    vary: 'Origin',
  },
  config: {
    transitional: [Object],
    adapter: [Function: httpAdapter],
    transformRequest: [Array],
    transformResponse: [Array],
    timeout: 0,
    xsrfCookieName: 'XSRF-TOKEN',
    xsrfHeaderName: 'X-XSRF-TOKEN',
    maxContentLength: -1,
    maxBodyLength: -1,
    validateStatus: [Function: validateStatus],
    headers: [Object],
    method: 'post',

Code snippets

import { OpenAIApi, Configuration } from 'openai';
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';
import getIP from '../../../utils/get-ip';

const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const redis = new Redis({
    url: process.env.UPSTASH_REST_API_DOMAIN,
    token: process.env.UPSTASH_REST_API_TOKEN,
});

const ratelimit = new Ratelimit({
    redis: redis,
    limiter: Ratelimit.slidingWindow(3, '1 d'),
});

export default async function response(req, res) {
    const ip = getIP(req);
    const result = await ratelimit.limit(ip);
    res.setHeader('X-RateLimit-Limit', result.limit);
    res.setHeader('X-RateLimit-Remaining', result.remaining);

    if (req.method !== 'POST') {
        res.status(405).json({ error: 'Method not allowed' });
        return;
    }

    if (!req.body.projectId) {
        res.status(400).json({ error: 'Missing projectId' });
        return;
    }

    if (!result.success) {
        res.status(429).json({
            error: 'You have reached your daily limit of 3 free completions. Try again tomorrow or upgrade your plan in account settings to continue using services regularly.',
        });
        return;
    }

    const completion = await openai.createCompletion({
        model: 'text-davinci-002',
        prompt: req.body.prompt,
        temperature: 0.6,
        max_tokens: 2000,
        presence_penalty: 0.5,
        // frequency_penalty: 0.5,
    });

    try {
        const completeModeration = await openai.createModeration({
            input: completion.data.choices[0].text,
            model: 'text-moderation-latest',
        });
        const moderationRes = completeModeration.data.results[0].flagged;
        if (moderationRes === false) {
            res.status(200).json({ response: completion.data.choices[0].text });
        } else {
            res.status(500).json({
                error: 'Sorry. The output has been flagged for inappropriate content. Please try again.',
            });
        }
    } catch (error) {
        res.status(500).json({ error: error.message });
    }
}

OS

mac

Node version

v18.12.0

Library version

3.1.0

CreateCompletionRequest Types

As part of a 'Pre-launch Review' we've been instructed to provide a user id as part of our completion requests:

Pass a uniqueID for every user w/ each API call (both for Completion & the Content Filter) e.g. user= $uniqueID. This 'user' param can be passed in the request body along with other params such as prompt, max_tokens etc.

However, the CreateCompletionRequest interface does not have an optional user property.

Let me know if I'm missing anything or if anything else is required on my end.

openai.createImage doesn't surface errors

Describe the bug

If I try calling the OpenAI API with openai.createImage and my request is malformed, I just get a generic "request failed with status code 400". If I make the request with cURL instead, I can see the reason my request failed (invalid_request_error, rejected due to safety system).

Possible to surface these errors to the client?

To Reproduce

  1. Try to create an image from the Node client with the text "Elon Musk crying laughing emoji"
  2. Observe the error message
  3. Do the same via cURL
  4. Observe the verbose error message

Code snippets

No response

OS

macOS

Node version

16

Library version

3.1.0

Information of updates upon request

Describe the feature or improvement you're requesting

It would be nice to have the ability to ask OpenAI what new features it has.

Additional context

It can already tell you what the AI itself is capable of, but it seems oblivious to any new changes in updates. It would be nice if, upon request, it listed new bug fixes, features added, and the current model version. This would help you know whether any issues or quality-of-life improvements have changed, without having to play around to see what works. I know this information is probably already listed somewhere, but it would be nice to get even just a link to that information upon request to the AI.

replace axios with fetch

Describe the feature or improvement you're requesting

  • Axios is not compatible with other runtimes (for example Edge).
  • Significant reduction in size
  • Fetch support for all runtimes (browser, node, edge, deno, workers)
  • The fetch lib could optionally be passed as a dependency

ref: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API

Additional context

When running current version with axios on the Edge I got this error:

An error occurred during OpenAI request [TypeError: adapter is not a function]

Max prompt tokens do not work when using text-davinci-003

Describe the bug

Hi team, I'm using the openai package with the model text-davinci-003 in my code.

I tried to use createCompletion to get a response back, and everything works well as long as I keep the prompt length below 2,000 tokens.
But if I use a prompt with more than 2,000 tokens, it returns a 400 error.

The documentation says it can accept 4,000 tokens:
https://beta.openai.com/docs/models/gpt-3

So is this a bug that will be fixed in the future?

Here is the code with params:

await openAIAgent.createCompletion({
  model: "text-davinci-003",
  prompt: prompt,
  temperature: 0.3,
  max_tokens: 2048,
  top_p: 1.0,
  frequency_penalty: 0.8,
  presence_penalty: 0.0,
})

To Reproduce

Use createCompletion with a prompt of more than 2,000 tokens (maybe directly use a prompt between 3,000 and 4,000).
The params are the same as in the following code:

await openAIAgent.createCompletion({
  model: "text-davinci-003",
  prompt: prompt,
  temperature: 0.3,
  max_tokens: 2048,
  top_p: 1.0,
  frequency_penalty: 0.8,
  presence_penalty: 0.0,
})

Code snippets

No response

OS

macOS

Node version

Node v18.12.0

Library version

openai v3.1.0

Missing Async in most functions

All of your examples use await; however, you did not mark any of your functions as async (at least in version 3.0.0).

So instead of:

const val = await blahblah();

you have to do:

const val = blahblah();
val.then((data) => {
  console.log(data);
});

Is this intentional?

Refused to set unsafe header "User-Agent"

Getting this error when trying to run the following code:
Refused to set unsafe header "User-Agent"

const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const response = await openai.createCompletion("code-davinci-001", {
  prompt: filePart,
  temperature: 0.1,
  max_tokens: 2000,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0.5,
});

Create model through API

Describe the feature or improvement you're requesting

Is it possible to create and train a model through the API, or only to train one?

Additional context

No response

Mismatch between `createFile(file: File)` and `createReadStream` in docs

Describe the bug

The typings were updated such that the signature is createFile(file: File), but the docs example shows a ReadStream being provided.

File is not available in Node. What is meant to be done here? Is this a typo; should it be File | ReadStream?

To Reproduce

Try to pass a ReadStream to createFile(), see type error.

Code snippets

No response

OS

N/A

Node version

latest

Library version

latest

Can't upload file

I'm trying to upload a file that can then be used to create a fine-tune. It's been passed through the CLI validator so I know it's correct, but I keep getting the following error from Axios:

data: {
      error: {
        message: 'The browser (or proxy) sent a request that this server could not understand.',
        type: 'server_error',
        param: null,
        code: null
      }
    }

Here's how I'm trying to upload the file;

const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
  });
  const openai = new OpenAIApi(configuration);

await openai.createFile(`${uploadFilename}.jsonl`, "fine-tune");

Am I doing this right? I can't seem to see what the problem could be.

Hi, why can't I access OpenAI?

Describe the feature or improvement you're requesting

Hi, I use a VPN to access ChatGPT. Why do you block some countries?

Additional context

No response

The result is completely wrong for an unknown reason

Describe the bug

Using the config below, I get unrelated content in the choices array. Putting the same prompt into the OpenAI Playground returns the right content ("The tl;dr version of this would be to simply say that the article is about the importance of choosing the right words when communicating, and that the wrong words can easily lead to misunderstanding.").

If I change the prompt to 'Tl;dr, summarize in one paragraph without bullet:\n in one paragraph without bullet.\n', it works fine. Other content also works fine, as if there were some cache involved. However, I tried running from AWS Lambda and locally, and both give the same wrong result.

config: {
  model: 'text-davinci-002',
  prompt: 'Tl;dr, summarize in one paragraph without bullet:\nsummarize in one paragraph without bullet.\n',
  temperature: 0.5,
  max_tokens: 320,
  best_of: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  logprobs: 0,
  top_p: 1
},

choices: [
  {
    text: '\nThe article discusses the pros and cons of taking a gap year, or a year off between high school and college. The pros include gaining life experience, taking time to figure out what you want to study, and having the opportunity to travel. The cons include falling behind your peers academically, feeling out of place when you return to school, and struggling to find a job after graduation. Ultimately, the decision to take a gap year is a personal one and depends on what you hope to gain from the experience.',
    index: 0,
    logprobs: [Object],
    finish_reason: 'stop'
  }
],

To Reproduce

Simply use the same config as above. It keeps happening to me, always with the same result.

Code snippets

No response

OS

Windows

Node version

Node v16

Library version

v3.0.1

Ability to copy without having to request new format

Describe the feature or improvement you're requesting

The ability to copy was something I found lacking at first, but as I played around a bit more I remembered that this thing can do basically anything, so I told it to put the text it generated into a copyable format. Usually this works, and if not I can say "put it into code format". It would, however, be more convenient to have a copy button right there at the top right corner, below the prompt you entered; this way you would not have the same thing twice if you needed, for example, a record of notes. It would also mean fewer things for the AI to do, as you would not need to request different formats to do one thing. I am aware I could highlight and copy manually, but sometimes text can get lengthy when fleshing out ideas.

Additional context

This is mostly relevant for lengthy text generation, and because for the time being you occasionally have to tell the AI to continue what it was typing, it would greatly increase efficiency when moving content into a notes folder in Evernote. This is not something that is totally undoable, but adding this feature, which the code format already has, would make things a bit more complete and natural for creative and productive work.

support Microsoft Azure OpenAI service endpoints

Describe the feature or improvement you're requesting

Update the API configuration to support Azure openai endpoints as well.

In order to use the Python OpenAI library with Microsoft Azure endpoints, we need to set the api_type, api_base and api_version in addition to the api_key. The api_type must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the engine parameter.

import openai
openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2022-12-01"

# create a completion
completion = openai.Completion.create(engine="deployment-name", prompt="Hello world")

# print the completion
print(completion.choices[0].text)

Additional context

No response

createCompletionFromModel is missing

Describe the bug

According to the fine-tuning docs on OpenAI, there should be a createCompletionFromModel function in your API:

const response = await openai.createCompletionFromModel({
  model: FINE_TUNED_MODEL,
  prompt: YOUR_PROMPT,
});

There is even a post in the forums that says it was included in version 2.0.2.

But I'm getting errors saying that it's not part of the import. Is that function deprecated? How do we create a completion using a fine-tuned model?

To Reproduce

I forked and cloned the repo to search for createCompletionFromModel to make sure I wasn't missing something, but it came up empty.

Code snippets

No response

OS

macOS

Node version

Node 16

Library version

latest

Response (completion) is always empty

Describe the bug

I am trying to get the time complexity for some source code, and the response always comes back null. The call to OpenAI works, however.

I am doing this through a Firebase Callable Cloud function, and when I log the response, this is an example of what I typically get:
completion.data.choices: [{"text":"","index":0,"logprobs":null,"finish_reason":"stop"}]

Any idea what's happening here?

Code snippets

const { Configuration, OpenAIApi } = require("openai");
const key =  'xxxxxxxxxxxx';
const configuration = new Configuration({ apiKey: key });
const openai = new OpenAIApi(configuration);

exports.getTimeComplexity = functions.https.onCall(async (data, context) => {
    const selection = data.selection;
    if (selection.length === 0) {
        throw new functions.https.HttpsError('invalid-argument', '[getTimeComplexity] Selection must be > 0 characters long');
    }
    if (!context.auth) {
        throw new functions.https.HttpsError('failed-precondition', '[getTimeComplexity] The function must be called while authenticated.');
    }

    // Get time complexity
    openai.createCompletion({
        model: "text-davinci-003",
        prompt: selection,
        temperature: 0,
        max_tokens: 64,
        top_p: 1.0,
        frequency_penalty: 0.0,
        presence_penalty: 0.0,
        stop: ["\n"],
    }).then((completion) => {
        const timeComplexity = completion.data.choices[0].text;
        console.log(`[getTimeComplexity] Time Complexity ✅: ${timeComplexity}`);
        return { 'success': true, 'complexity': timeComplexity };
    });
});

OS

macOS v12.6

Node version

Node v16

Library version

openai v3.1.0

429 error despite zero API usage

im using the nodejs example from the docs. Inserted my API key.

const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
  apiKey: "MY-KEY",
});

async function getCompletion () {
  const openai = new OpenAIApi(configuration);
  const response = await openai.createCompletion("text-curie-001", {
    prompt: "Say this is a test",
    max_tokens: 5
  })
  .catch(err => {
    console.log(err);
  });
  console.log(response);
}

getCompletion();

returns:

 response: {
    status: 429,
    statusText: 'Too Many Requests',
    headers: {
      date: 'Thu, 28 Apr 2022 12:51:34 GMT',
      'content-type': 'application/json; charset=utf-8',
      'content-length': '205',
      connection: 'close',
      vary: 'Origin',
      'x-request-id': 'xxxx',
      'strict-transport-security': 'max-age=15724800; includeSubDomains'
    },
    config: {
      transitional: [Object],
      adapter: [Function: httpAdapter],
      transformRequest: [Array],
      transformResponse: [Array],
      timeout: 0,
      xsrfCookieName: 'XSRF-TOKEN',
      xsrfHeaderName: 'X-XSRF-TOKEN',
      maxContentLength: -1,
      maxBodyLength: -1,
      validateStatus: [Function: validateStatus],
      headers: [Object],
      method: 'post',
      data: '{"prompt":"Say this is a test","max_tokens":5}',
      url: 'https://api.openai.com/v1/engines/text-curie-001/completions'
    },
    request: <ref *1> ClientRequest {
      _events: [Object: null prototype],
      _eventsCount: 7,
      _maxListeners: undefined,
      outputData: [],
      outputSize: 0,
      writable: true,
      destroyed: false,
      _last: true,
      chunkedEncoding: false,
      shouldKeepAlive: false,
      _defaultKeepAlive: true,
      useChunkedEncodingByDefault: true,
      sendDate: false,
      _removedConnection: false,
      _removedContLen: false,
      _removedTE: false,
      _contentLength: null,
      _hasBody: true,
      _trailer: '',
      finished: true,
      _headerSent: true,
      socket: [TLSSocket],
      _header: 'POST /v1/engines/text-curie-001/completions HTTP/1.1\r\n' +
        'Accept: application/json, text/plain, */*\r\n' +
        'Content-Type: application/json\r\n' +
        'User-Agent: OpenAI/NodeJS/2.0.5\r\n' +
        'Authorization: Bearer MY-KEY\n' +
        'Content-Length: 46\r\n' +
        'Host: api.openai.com\r\n' +
        'Connection: close\r\n' +
        '\r\n',
      _keepAliveTimeout: 0,
      _onPendingData: [Function: noopPendingOutput],
      agent: [Agent],
      socketPath: undefined,
      method: 'POST',
      maxHeaderSize: undefined,
      insecureHTTPParser: undefined,
      path: '/v1/engines/text-curie-001/completions',
      _ended: true,
      res: [IncomingMessage],
      aborted: false,
      timeoutCb: null,
      upgradeOrConnect: false,
      parser: null,
      maxHeadersCount: null,
      reusedSocket: false,
      host: 'api.openai.com',
      protocol: 'https:',
      _redirectable: [Writable],
      [Symbol(kCapture)]: false,
      [Symbol(kNeedDrain)]: false,
      [Symbol(corked)]: 0,
      [Symbol(kOutHeaders)]: [Object: null prototype]
    },
    data: { error: [Object] }
  },
  isAxiosError: true,
  toJSON: [Function: toJSON]
}

The request returns a 429 status. I have zero usage on my account, and I only make a single request each time.
I tried all engines.

Is this a known problem? Since this problem has little relation to this Node repo, I will close this issue as soon as possible.

code-davinci-002

I'm using the example code from the Playground:

const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const response = await openai.createCompletion("code-davinci-002", {
  prompt: "##### Translate this function  from Python into Haskell\n### Python\n    \n    def predict_proba(X: Iterable[str]):\n        return np.array([predict_one_probas(tweet) for tweet in X])\n    \n### Haskell",
  temperature: 0,
  max_tokens: 54,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  stop: ["###"],
});

This returns a 404 error. Is Codex not available via the API?

prompt history

Describe the feature or improvement you're requesting

Maybe it exists and I am just not finding how: I need to make multiple calls while maintaining history, for example:

1- What is the size of the earth?
2- And of the moon?

Does this functionality exist or would it be a feature?

Additional context

No response
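For reference, the usual approach with the chat completions API shown in the README above is to resend the prior turns in the messages array; a minimal sketch:

const history: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: 'user', content: 'What is the size of the earth?' },
];

const first = await openai.chat.completions.create({ model: 'gpt-3.5-turbo', messages: history });
history.push(first.choices[0].message);

// The follow-up carries the earlier turns, so the model can resolve "And of the moon?".
history.push({ role: 'user', content: 'And of the moon?' });
const second = await openai.chat.completions.create({ model: 'gpt-3.5-turbo', messages: history });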

Can the axios dependency please be bumped to a current release of axios?

Describe the bug

Currently it uses axios ^0.26.0 while we are at axios 1.2.1

It's easy to mitigate, but feels really wrong to use such an old version which has a totally different type interface.

To Reproduce

Simply install the openai package and try to pass a current version of axios into the openai instance constructor.

Code snippets

No response

OS

osx

Node version

node 19

Library version

openai 3.1.0

Use createCompletion() with fine-tune models

Currently the createCompletion() function requires an engine ID as its first parameter. When I pass a model ID instead, I get a 404 error, which makes sense if it expects an engine.
Is it possible to use a fine-tuned model?

code writing

Describe the bug

I don't get back all the code from the API.

write me a function in javascript that makes 10 parallel fetch requests simultaneously for 100 iterations

The code is cut off.

To Reproduce

const { Configuration, OpenAIApi } = require("openai");

const argv = require('minimist')(process.argv.slice(2));
console.log(argv.help);
const configuration = new Configuration({
  apiKey: 'xxx',
});
const openai = new OpenAIApi(configuration);

(async () => {

  const response = await openai.createCompletion({
    model: "code-davinci-002",
    prompt: `/* javascript: ${argv.help}  */`,
    temperature: 0,
    max_tokens: 256,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
  });

  response.data.choices.map(c => console.log(c.text));
})();

$ node code.js --help "write me a function that makes 10 parallel fetch requests simultaneously for 100 iterations"

Code snippets

No response

OS

Linux

Node version

19

Library version

3.0.1

Getting more than a dozen errors in api.d.ts

Describe the bug

Updated from v2.0.5 to 3.0.0 of the package and got 16 errors in node_modules/openai/dist/api.d.ts:

[{
  "resource": "PROJECT_PATH/node_modules/openai/dist/api.d.ts",
  "owner": "typescript",
  "code": "2304",
  "severity": 8,
  "message": "Cannot find name 'File'.",
  "source": "ts",
  "startLineNumber": 1666,
  "startColumn": 24,
  "endLineNumber": 1666,
  "endColumn": 24
}]

To Reproduce

Simply npm install openai on any typescript project.

Code snippets

"devDependencies": {
    "@types/glob": "^7.2.0",
    "@types/mocha": "^9.1.1",
    "@types/node": "14.x",
    "@types/vscode": "^1.67.0",
    "@typescript-eslint/eslint-plugin": "^5.21.0",
    "@typescript-eslint/parser": "^5.21.0",
    "@vscode/test-electron": "^2.1.3",
    "eslint": "^8.14.0",
    "glob": "^8.0.1",
    "mocha": "^9.2.2",
    "typescript": "^4.6.4"
  },
  "dependencies": {
    "openai": "^3.0.0"
  }

OS

macOS

Node version

Node v16.13.1

Library version

3.0.0

Cannot find name `File`.

Describe the bug

Imported the openai module and I'm getting an error for an undefined type File.


To Reproduce

  1. Run npm install openai.
  2. Create a new NodeJS server-side project.

Code snippets

No response

OS

Windows 10

Node version

v16.6.1

Library version

v3.1.0

Light/dark mode screen rapidly flashing back and forth

Describe the bug

I was using the chat to make a few different calendars for this year. In the process I had multiple tabs with a new chat open, so that I could have one for the entire year and one for the day-to-day routine. When I got to making the routine, I decided to toggle the screen into dark mode, as it had been getting late. Upon doing so, the screen rapidly started flashing between the two modes, since I had forgotten to close the other tabs; this happened for each tab. I closed the first 3 and the problem still seems to persist.

To Reproduce

  1. Open a new chat in multiple tabs; two should work, but I had 3 to 4 open while working on multiple projects.
  2. Have the first tabs in light mode, then toggle the third into dark mode. It was fast to occur, and the toggle button itself flashes, making it hard to click in an effort to fix it.
  3. Enjoy your broken chat screen as it annoyingly flashes while you try to be productive.
  4. Try closing the first two tabs and see if you can fix the third. It won't work, but you can try; you can even try closing the third. I am not sure how long it lasts, but after 30 seconds of trying everything I could think of, I decided to report this bug.
  5. Opening the chat now takes ages to load, probably because the flickering has not yet stopped.
  6. After finally loading, the flickering has slowed, but it still persists consistently, at least on this device.

Code snippets

No response

OS

chrome

Node version

December 15th version

Library version

v3.0.1 is what was filled in as an example in this text box, but I could not currently find it.

Chat bot stopping in the middle of generating text

Describe the bug

I have been using this chatbot to keep track of schedules for a yearly calendar and for keeping/organizing notes. When making a calendar, or really anything that requires a decent amount of typing, the chatbot will stop mid-sentence. This is fixable by saying "continue" or "you stopped halfway", etc., but it is very tedious to keep typing this halfway through a length of text. It is especially annoying when you tell it to put the output into a copyable format, since the stop then results in having to copy multiple pieces and edit a few lines so the result is coherent where I paste the text.

To Reproduce

  1. Tell it to write out a calendar of the year benchmarked by every day, each week, or monthly. It will probably say that it is pointless to write out each individual day, and it might say the same for every week, but for months it can.
  2. With the calendar it has outlined, tell it to add benchmark dates like all key holidays, maybe put in a few birthdays of people you know, and just make an overbooked sort of calendar.
  3. If it manages to do that without stopping halfway, tell it to put it into a copyable format, or keep adding dates and make the time slots more defined. Eventually it will stop halfway through a sentence, or forget its typical ending of "I hope this information provided fulfills your request".
  4. Tell it to continue until it gives the rest of the text. It's nothing system-breaking, but it is very annoying to do each time you want to revise a huge selection of text or add one too many days to your filled calendar.

Code snippets

No response

OS

chrome

Node version

December 15th version

Library version

v3.0.1 is what was filled in as an example in this text box, but I could not currently find it.

Search Option for previous conversations, groupings for similar chats

Describe the feature or improvement you're requesting

It would be amazingly helpful if, when exceeding 5 saved conversations, having 6 and up would grant a search bar in the chat selection. This way, if you need to pull text or ideas from previous chats, you can search not just titles but keywords. For example, you might ask in a previous conversation about planning out a goal. Maybe after talking and problem-solving about keeping your resolutions, you ask for a book recommendation or a source to learn more from. A week later you are on Amazon and you think you might want that book the AI recommended. So now the previous chat, which was titled "new years resolution goals", will come up when typing "new year", "book/books", or the title of the book, such as Atomic Habits. To take this idea further, you could ask to group conversations: maybe you have three priorities for the year beyond general goals and plans, the first working out, the second diet, and the third general goals/habits. You could then group these into one titled "New Year". Perhaps the AI could do this by finding similar context between threads, but even a manual option would be nice.

Additional context

Using the AI so much for general questions, new ideas, or simply keeping a branching idea separate from the previous topic, my prior chat selection box requires a lot of scrolling when pulling up prior topics. This would greatly help when recalling a record of past conversations.

Help me to createImageVariation from an https:// URL

Describe the feature or improvement you're requesting

# Working code:
let readStream = fs.createReadStream("image.png");
response = await openai.createImageVariation(readStream, 1, "1024x1024");

# Not working code:
let readStream = https.get("https://storage.googleapis.com/inceptivestudio/1672042338704.png", (stream) => {
  return stream;
});
response = await openai.createImageVariation(readStream, 1, "1024x1024");

Additional context

No response

Deleting Fine-tune model results in a 404

Using the call

const response = await openai.listFineTunes();

to get the list of my fine-tunes. Then, from that list, I'm using the fine_tuned_model field and passing it to:

await openai.deleteModel(model as string);

I receive a 404 error back that says;

{
  error: {
    message: 'That model does not exist',
    type: 'invalid_request_error',
    param: 'model',
    code: null
  }
}

The URL looks like this:

https://api.openai.com/v1/models/curie%3Aft-personal-2022-05-02-16-11-13

Following the steps in the documentation here: https://beta.openai.com/docs/api-reference/fine-tunes/delete-model

Thanks!

Edit request giving 404

Hi openai,

I'm currently using completion API and am attempting to use the edit API now as well.

Code:

const result = await openAI.createCompletion('text-davinci-002', {
    prompt: `${content}\n\nTl;dr`,
    temperature: 0.7,
    max_tokens: 60,
    top_p: 1.0,
    frequency_penalty: 0.0,
    presence_penalty: 0.0,
  })

const result = await openAI.createEdit('text-davinci-002', {
    input: content,
    instruction: 'Rewrite this more simply',
  })

The first request has been working for months and still is, but the second returns this

(node:5013) UnhandledPromiseRejectionWarning: Error: Request failed with status code 404
    at createError (/Users/zfoster/gravity/node_modules/openai/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/Users/zfoster/gravity/node_modules/openai/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/Users/zfoster/gravity/node_modules/openai/node_modules/axios/lib/adapters/http.js:322:11)
    at IncomingMessage.emit (events.js:412:35)
    at IncomingMessage.emit (domain.js:470:12)
    at endReadableNT (internal/streams/readable.js:1317:12)
    at processTicksAndRejections (internal/process/task_queues.js:82:21)

Let me know if I need to make any changes and if there's an existing example for doing something like a "simplification rewrite". Basically, I am trying to summarize and rewrite the text in simpler words with these two operations.

Thanks!

Classification betas not leading to expected results

What happens:
Passing classification_betas, e.g. classification_betas: [1, 0.5] leads to equal f-beta values in the resulting CSV file, although precision and recall differ. The columns are named correctly in the resulting file, e.g. classification/f0.5 etc, but the cell values always equal f-1, not the β of the respective column.

What I expected:
Differing f-values, as the parameter weighs precision higher (1x, 2x, ... times as much) than recall.

Maybe I interpreted the docs wrongly, though, and this parameter is supposed to be used differently.

Input sanitizing causes 400 BAD_REQUEST

I was using the openai.createClassification method and started getting 400 BAD REQUEST when I introduced the input string (attached to the bottom) as one of the examples for labeling.

I believe there is some sanitizing that fails in some area.

href=\"https:// becomes href=\\"https:// which is not valid JSON for payload.

Here is the JSON which seems to be entirely valid

Here is the raw request which was created by the method and seems to be invalid JSON.

Error: Parse error on line 11:
...rom <a href=\\"				https: //github.com/
----------------------^
Expecting 'EOF', '}', ':', ',', ']', got 'undefined'

Here is my code

Is it possible to save chat history just like ChatGPT does?

Describe the feature or improvement you're requesting

I like to use the API to ask questions. As you know, ChatGPT saves the chat conversation; the next time you ask a question in the chat, it answers based on the prior conversation. But this API of the openai library does not seem to save the chat.

Here is the test, compared with what ChatGPT does (screenshots in the original issue).

Additional context

I don't know why it did not save the chat as ChatGPT does. Is it because it's a free account, or something else?
I can pay for an API that saves the chat and then responds just like ChatGPT does. I need help. Thanks.

CreateModeration gives 400

Describe the bug

createModeration is giving 400 for all models/input variations.

To Reproduce

  const openai = new OpenAIApi(
    new Configuration({
      apiKey: process.env.OPEN_AI_SECRET,
    })
  );

  // gives 400 without clues
  const moderation = await openai.createModeration({
    model: "text-davinci-003",
    input: "This is a very nice text",
  });

OS

macOs

Node version

v18.12.1

Library version

3.1.0

Add type definitions for TS support

Describe the feature or improvement you're requesting

Right now, migrating to TypeScript or creating a d.ts file manually are among the few options.
If the API is going to stay minimal in the long term, then a simple, manually created index.d.ts should be enough.

Additional context

No response

Usage type for Completion Requests is missing

Description

According to the docs, a response to a completion request should have a usage property that lets you see how many tokens were used for the request + response. Manually checking the response of openai.createCompletion also shows that the usage property exists in response.data:

const response = await openai.createCompletion({
    model: 'text-davinci-002',
    prompt: `<someprompt>`,
  });

console.log(response.data.usage)

However, the CreateCompletionResponse type does not include usage, and thus TypeScript throws an error when trying to access usage in an openai completion response.

Expected Behavior

openai should have a type definition for usage in CreateCompletionResponse that lets you see & access the tokens used in a request.
