
openai-api-rs's Introduction

OpenAI API client library for Rust (unofficial)

The OpenAI API client Rust library provides convenient access to the OpenAI API from Rust applications.

Check out the documentation on docs.rs.

Installation:

Cargo.toml

[dependencies]
openai-api-rs = "4.0.8"

Usage

The library needs to be configured with your account's secret key, which is available on the website. We recommend setting it as an environment variable. Here's an example of initializing the library with the API key loaded from an environment variable and creating a completion:

Set the OPENAI_API_KEY environment variable

$ export OPENAI_API_KEY=sk-xxxxxxx

Create client

let client = Client::new(env::var("OPENAI_API_KEY").unwrap().to_string());

Create request

let req = ChatCompletionRequest::new(
    GPT4.to_string(),
    vec![chat_completion::ChatCompletionMessage {
        role: chat_completion::MessageRole::user,
        content: chat_completion::Content::Text(String::from("What is bitcoin?")),
        name: None,
    }],
);

Send request

let result = client.chat_completion(req)?;
println!("Content: {:?}", result.choices[0].message.content);

Set the OPENAI_API_BASE environment variable (optional)

$ export OPENAI_API_BASE=https://api.openai.com/v1

Example of chat completion

use openai_api_rs::v1::api::Client;
use openai_api_rs::v1::chat_completion::{self, ChatCompletionRequest};
use openai_api_rs::v1::common::GPT4;
use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(env::var("OPENAI_API_KEY").unwrap().to_string());

    let req = ChatCompletionRequest::new(
        GPT4.to_string(),
        vec![chat_completion::ChatCompletionMessage {
            role: chat_completion::MessageRole::user,
            content: chat_completion::Content::Text(String::from("What is bitcoin?")),
            name: None,
        }],
    );

    let result = client.chat_completion(req)?;
    println!("Content: {:?}", result.choices[0].message.content);
    println!("Response Headers: {:?}", result.headers);

    Ok(())
}

More Examples: examples

Check out the full API documentation for examples of all the available functions.

Supported APIs

License

This project is licensed under the MIT license.

openai-api-rs's People

Contributors

anush008, avastmick, d-roak, dejavu1987, dongri, hytracen, logankilpatrick, lordi, n3mes1s, night-cruise, onlyferris, ryanolson, sharifhsn, szabgab, zh4ngx


openai-api-rs's Issues

Derive serialize for EmbeddingResponse

Describe the feature or improvement you're requesting

Hi. Thank you for deriving Serialize for ChatCompletionResponse.

I would like to ask for the same to be done for EmbeddingResponse as well :)

#[derive(Debug, Deserialize)]
pub struct EmbeddingResponse {
    pub object: String,
    pub data: Vec<EmbeddingData>,
    pub model: String,
    pub usage: Usage,
}

In addition to Debug and Deserialize, I would like this struct to also derive Serialize.
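The requested definition would then read as follows (a sketch of the one-line change, with the supporting types as quoted above):

#[derive(Debug, Serialize, Deserialize)]
pub struct EmbeddingResponse {
    pub object: String,
    pub data: Vec<EmbeddingData>,
    pub model: String,
    pub usage: Usage,
}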

Additional context

Same motivation as in issue #69 (nice), but for EmbeddingResponse

How to pass organization header?

Thanks for making this crate, it seems very useful :)

Btw, on https://platform.openai.com/account/api-keys it says:

(screenshot from the API keys page)

https://platform.openai.com/docs/api-reference/requesting-organization says:

Requesting organization

For users who belong to multiple organizations, you can pass a header to specify which organization is used for an API request. Usage from these API requests will count against the specified organization's subscription quota.
Example curl command:

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Organization: org-Q4qnf6HKZtLiqUySAB41oMeY"

Could you please add this header functionality to this crate, so that we can select the org? :)
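For reference, a minimal sketch of sending that header from Rust with reqwest's blocking client (an illustration of the raw request, not this crate's API; reqwest's "blocking" feature is assumed):

use std::env;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = env::var("OPENAI_API_KEY")?;
    // The organization header rides alongside the usual bearer token.
    let resp = reqwest::blocking::Client::new()
        .get("https://api.openai.com/v1/models")
        .bearer_auth(&api_key)
        .header("OpenAI-Organization", "org-Q4qnf6HKZtLiqUySAB41oMeY")
        .send()?;
    println!("{}", resp.status());
    Ok(())
}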

Update API for GPT4 Vision Preview

Describe the bug

When using the GPT 4 Vision Preview API, the finish_reason field is missing. It needs to be optional, and a new finish_details field needs to be added (and also optional).
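A sketch of the change being requested (the surrounding field types are simplified placeholders, not the crate's actual definitions):

use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct ChatCompletionChoice {
    pub index: i64,
    pub message: serde_json::Value,                // placeholder for the real message type
    pub finish_reason: Option<String>,             // must become optional
    pub finish_details: Option<serde_json::Value>, // new, also optional
}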

To Reproduce

Follow the example in the Quickstart: https://platform.openai.com/docs/guides/vision/quick-start

  1. Start a completion using the GPT4_VISION_PREVIEW model
  2. Use the text and image_url in the example
  3. Note the error about the missing field

Code snippets

let req = ChatCompletionRequest {
    model: GPT4_VISION_PREVIEW.to_string(),
    messages: vec![chat_completion::ChatCompletionMessage {
        role: MessageRole::user,
        content: String::from(
            r#"[
                {
                    "type": "text",
                    "text": "What's in this image?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    }
                }
            ]"#,
        ),
        name: None,
        function_call: None,
    }],
    functions: None,
    function_call: None,
    temperature: None,
    top_p: None,
    n: None,
    response_format: None,
    stream: None,
    stop: None,
    max_tokens: Some(300),
    presence_penalty: None,
    frequency_penalty: None,
    logit_bias: None,
    user: None,
};

OS

Linux

Rust version

Rust v1.73.0

Library version

openai-api-rs v2.1.1

Field marked as deprecated causes pipeline failure even though value used is `None`

Describe the bug

error: use of deprecated field `v1::chat_completion::ChatCompletionRequest::function_call`: This field is deprecated. Use `tool_choice` instead.
  --> src/v1/chat_completion.rs:77:13
   |
77 |             function_call: None,
   |             ^^^^^^^^^^^^^^^^^^^

It seems that marking the field as deprecated is not working as intended: even though None is specified, the warning is triggered inside the crate's own new(...) method, where ChatCompletionRequest constructs itself, and that makes the pipeline fail.

This change was introduced in #54
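A minimal self-contained reproduction, together with the usual fix of annotating the constructor with #[allow(deprecated)] so the crate can keep initializing its own deprecated field (a sketch, not necessarily the fix the maintainers will choose):

pub struct Request {
    pub model: String,
    #[deprecated(note = "This field is deprecated. Use `tool_choice` instead.")]
    pub function_call: Option<String>,
}

impl Request {
    #[allow(deprecated)] // suppresses the deprecation warning inside this constructor
    pub fn new(model: String) -> Self {
        Self {
            model,
            function_call: None, // no longer trips a deny-warnings pipeline
        }
    }
}

fn main() {
    let _req = Request::new("gpt-4".to_string());
}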

To Reproduce

See the pipeline in the main branch.

Code snippets

No response

OS

All

Rust version

Any

Library version

openai-api-rs v2.1.7

Suggestion: Remove Option<T> from required fields

Describe the feature or improvement you're requesting

Hey. I've noticed that almost all the fields in the request structs are Option<T>, even though some of them are required according to the OpenAI spec. Passing None for them just returns a BAD_REQUEST.

Can I work on this and open a PR to remove Option<T> from all the required fields according to the OpenAI reference?

Additional context

No response

Wasm support

Currently the crate always depends on tokio, which means it can't be compiled to wasm for use in frontends that want to make OpenAI API requests.
It would be great to make the tokio dependency optional to allow compiling to wasm.
(reqwest will work when compiling to wasm.)
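A sketch of how the gating could look in Cargo.toml (the feature name and versions are assumptions):

[features]
default = ["native"]
native = ["dep:tokio"]

[dependencies]
tokio = { version = "1", optional = true }
reqwest = { version = "0.11", default-features = false, features = ["json"] }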

Re-use reqwest::Client instead of constructing one per request

Client::post creates a new reqwest::Client on each request.

Reasoning from the reqwest docs on why you should re-use a client instead of building a new one:

The Client holds a connection pool internally, so it is advised that you create one and reuse it.

You do not have to wrap the Client in an Rc or Arc to reuse it, because it already uses an Arc internally.
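A sketch of the requested design, with a reqwest blocking client built once and stored on the struct (field and method names are assumptions, not the crate's actual API):

use reqwest::blocking::Client as HttpClient;

pub struct Client {
    api_key: String,
    http: HttpClient, // holds the connection pool; build once, reuse everywhere
}

impl Client {
    pub fn new(api_key: String) -> Self {
        Self { api_key, http: HttpClient::new() }
    }

    pub fn post(&self, url: &str, body: String) -> reqwest::Result<reqwest::blocking::Response> {
        // Every call shares the pooled connections held by `self.http`.
        self.http.post(url).bearer_auth(&self.api_key).body(body).send()
    }
}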

Feature request: function variant in MessageRole enum

Describe the feature or improvement you're requesting

Hey. Thanks for the great library; I've used it quite a lot. I've been trying to build a feature using function calls, but I noticed that the MessageRole enum doesn't have a function variant.

#[derive(Debug, Serialize, Deserialize)]
#[allow(non_camel_case_types)]
pub enum MessageRole {
    user,
    system,
    assistant,
}

As mentioned in the OpenAI reference: https://platform.openai.com/docs/api-reference/chat/create#chat/create-role

Additional context

No response

Correctly version breaking changes

Describe the bug

There have been a few breaking changes that have not been labeled as such: the major version of the cargo package is not being incremented to indicate breaking changes.

Specifically, it looks like #46 should have been a breaking change, since it changed the chat response types to be an Option.

Without consistent and predictable versioning, using this library becomes brittle during upgrades.

A few suggestions:

  • Declare a VERSIONING.md and document what consumers can expect from versions of this package
  • Stick to semver semantics (and mark breaking changes as major version releases)

To Reproduce

  1. Use version 2.0.2
  2. Attempt to upgrade to the latest 2.1.7
  3. Notice the following build errors using chat response types:
error[E0308]: mismatched types
   --> src/conversation/mod.rs:102:25
    |
101 |                     match response.choices[0].finish_reason {
    |                           --------------------------------- this expression has type `std::option::Option<FinishReason>`
102 |                         FinishReason::function_call => {
    |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `Option<FinishReason>`, found `FinishReason`
    |
    = note: expected enum `std::option::Option<FinishReason>`
               found enum `FinishReason`

I would not expect type changes from a minor version bump.

OS

Linux

Rust version

1.75

Library version

2.0.2

Small code error in Readme.md, in the Usage section

Describe the bug

Location of the error: Readme.md > Usage section.
If you copy/paste the code snippets (create client, create request, send request), the code does not compile, giving the following error:

error[E0308]: mismatched types
   --> src/main.rs:33:36
    |
33  |     let result = client.completion(req)?;
    |                         ---------- ^^^ expected `CompletionRequest`, found `ChatCompletionRequest`
    |                         |
    |                         arguments to this method are incorrect
    |
note: method defined here
   --> /home/lzins/.cargo/registry/src/index.crates.io-6f17d22bba15001f/openai-api-rs-4.0.8/src/v1/api.rs:174:12
    |
174 |     pub fn completion(&self, req: CompletionRequest) -> Result<CompletionResponse, APIError> {
    |            ^^^^^^^^^^

To Reproduce

Copy/paste the code snippets of the Usage section of the Readme.

Code snippets

No response

OS

windows

Rust version

1.70.0

Library version

4.0.8

Allow to override https://api.openai.com/v1

Describe the feature or improvement you're requesting

I'd like to use openai-api-rs with Local AI

To do that I'll need the ability to specify for example https://localhost:8080/v1 as the API endpoint.

Something like

impl Client {
    ...
    pub fn new_with_endpoint(api_url: String) -> Self {
        Self { 
            api_key: None,
            api_url: Some(api_url) 
        }
    }
    ...
}
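Usage against a local server would then look something like this (following the proposed signature above):

let client = Client::new_with_endpoint("https://localhost:8080/v1".to_string());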

Additional context

I'm seeing a few projects now re-implementing the OpenAI API spec as a way to plug in different LLMs.

Async support

Describe the feature or improvement you're requesting

Most real-time applications are likely to use Tokio or another async runtime. The current Client does not support async.

Additional context

No response

Run CI pipeline on PRs always, before merging

Describe the feature or improvement you're requesting

To avoid CI-breaking changes getting into main, such as #61 (which was caused by #54), the CI pipeline should run on PRs before they can be merged.

Additional context

Make it so that this is enforced by GitHub.

Return rate limit information for each completed request

Describe the feature or improvement you're requesting

The OpenAI API returns rate limit information in response headers. This is useful for implementing production apps that must respect the rate limits and introduce strategies like exponential backoff to deal with them.

An alternative would be to build one of the rate-limit back-off strategies into the library and make it configurable. That said, since different apps may want different behaviors, returning this information to the user is still crucial so that developers can implement the behavior they need.
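As an illustration, reading one of OpenAI's documented rate-limit headers could look like this (the header name is from OpenAI's docs; the HeaderMap type is reqwest's and may differ from what this crate exposes):

use reqwest::header::HeaderMap;

// Returns the number of requests remaining in the current window, if the header is present.
fn remaining_requests(headers: &HeaderMap) -> Option<u64> {
    headers
        .get("x-ratelimit-remaining-requests")?
        .to_str()
        .ok()?
        .parse()
        .ok()
}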

Additional context

No response

How to do GPT3?

Describe the feature or improvement you're requesting

Tried this, I guess no support yet?

(screenshot of the attempted code)

Would really love to use GPT3, thanks ♥️

Derive serialize for ChatCompletionResponse

Describe the feature or improvement you're requesting

In 4.0.1 the definition of ChatCompletionResponse is:

#[derive(Debug, Deserialize)]
pub struct ChatCompletionResponse {
    pub id: String,
    pub object: String,
    pub created: i64,
    pub model: String,
    pub choices: Vec<ChatCompletionChoice>,
    pub usage: common::Usage,
    pub system_fingerprint: Option<String>,
}

In addition to Debug and Deserialize, I would like it to also derive Serialize.

Additional context

This is desirable because deriving Serialize would allow the response to be serialized to JSON for structured logging and tracing purposes.
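A minimal sketch of what the derive enables, using a hypothetical stand-in struct:

use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct DemoResponse {
    id: String,
    model: String,
}

fn main() -> serde_json::Result<()> {
    let resp = DemoResponse { id: "chatcmpl-123".into(), model: "gpt-4".into() };
    // With Serialize derived, the response can be emitted as a structured log line.
    println!("{}", serde_json::to_string(&resp)?);
    Ok(())
}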

Eq and PartialEq Traits

Describe the feature or improvement you're requesting

Many of the API structs should derive the Eq and PartialEq traits.

example:

#[derive(Debug, Deserialize, Serialize, Clone, PartialEq, Eq)]
#[allow(non_camel_case_types)]
pub enum MessageRole {
    user,
    system,
    assistant,
    function,
}

Additional context

No response

Embedding response error

Describe the bug

The program returns the following error:
Error: APIError { message: "error decoding response body: missing field `usage` at line 1545 column 5" }

Not sure if this is due to a change in the API.

To Reproduce

First create an instance of a client with your API key, generate an embedding request, and send it.

Code snippets

let req = EmbeddingRequest {
    model: "text-embedding-ada-002".to_string(),
    input: "story time".to_string(),
    user: Option::None,
};

let result = client.embedding(req).await?;
println!("{:?}", result);

Ok(())

OS

macOS

Rust version

Rust v1.64.0

Library version

v0.1.0

Error 404

Describe the bug

When I try to run a completion, I get a 404 error. I am using the NagaAI API - https://api.naga.ac/v1. When I use NagaAI with the official Python library, it doesn't produce any errors.

To Reproduce

  1. Get a prompt and a client
  2. Run a completion
  3. Return generated text

Code snippets

use openai_api_rs::v1::api::Client;
use openai_api_rs::v1::completion::{self, CompletionRequest, CompletionResponse};
pub fn get_completion(client: Client, prompt: &str) -> String {
    let prompt_string = String::from(prompt);
    let req = CompletionRequest::new(String::from("gpt-3.5-turbo"), prompt_string)
        .max_tokens(3000)
        .temperature(0.9)
        .top_p(1.0)
        .stop(vec![String::from(" Human:"), String::from(" AI:")])
        .presence_penalty(0.6)
        .frequency_penalty(0.0);

    let result = client.completion(req).unwrap(); // Error location
    result.choices[0].text.clone()
}

pub fn client(api: &str) -> Client {
    let api_data = String::from(api);
    const BASE_URL: &str = "https://api.naga.ac/v1";
    let endpoint = std::env::var("OPENAI_API_BASE").unwrap_or_else(|_| BASE_URL.to_owned());
    Client::new_with_endpoint(endpoint, api_data)
}

OS

macOS

Rust version

Rust v1.75.0

Library version

openai-api-rs 4.0.8

Support Proxy

Describe the feature or improvement you're requesting

Send requests via an HTTP proxy. minreq supports proxies.
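For illustration, this is what proxy configuration looks like with reqwest (the issue notes the crate uses minreq, so the actual change would differ; the proxy URL is a placeholder):

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Route every request through an HTTP proxy.
    let http = reqwest::blocking::Client::builder()
        .proxy(reqwest::Proxy::all("http://127.0.0.1:8118")?)
        .build()?;
    let status = http.get("https://api.openai.com/v1/models").send()?.status();
    println!("{status}");
    Ok(())
}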

Additional context

No response

Error decoding response body: missing field `id`

Describe the bug

I now have the following code

use openai_api_rs::v1::api::Client;
use openai_api_rs::v1::chat_completion::{self, ChatCompletionRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new_with_endpoint("http://localhost:8080".to_string(), "NOKEY".to_string());
    let req = ChatCompletionRequest {
        model: "ggml-gpt4all-j".to_string(),
        messages: vec![chat_completion::ChatCompletionMessage {
            role: chat_completion::MessageRole::user,
            content: String::from("What is Bitcoin?"),
            name: None,
            function_call: None,
        }],
        functions: None,
        function_call: None,
        temperature: None,
        top_p: None,
        n: None,
        stream: None,
        stop: None,
        max_tokens: None,
        presence_penalty: None,
        frequency_penalty: None,
        logit_bias: None,
        user: None,
    };
    let result = client.chat_completion(req).await?;
    println!("{:?}", result.choices[0].message.content);
    Ok(())
}

I run ggml-gpt4all-j using the following

docker run -p 8080:8080 -it --rm ghcr.io/purton-tech/fine-tuna-model-api

When the call returns I get the following error

Error: APIError { message: "error decoding response body: missing field `id` at line 1 column 549" }

Looks like the field on this line would need to be optional for local AI compatibility: https://github.com/dongri/openai-api-rs/blob/main/src/v1/chat_completion.rs#L85

To Reproduce

As above.

Code snippets

No response

OS

Any

Rust version

1.64

Library version

1.12

Missing struct option on ChatCompletionRequest function_call type

Describe the bug

From the OpenAI docs:

If you want to force the model to call a specific function you can do so by setting function_call: {"name": "insert-function-name"}. You can also force the model to generate a user-facing message by setting function_call: "none". Note that the default behavior (function_call: "auto") is for the model to decide on its own whether to call a function and if so which function to call.

The current implementation only supports strings like "auto" or "none".

But the type should be an enum of "auto", "none", and a struct that serializes to {"name": "function-name"}.

To Reproduce

  1. Create a new ChatCompletionRequest instance.
  2. Add an API-valid function_call shape, i.e. a struct that serializes to { "name": "function-name" }
  3. The code will not compile, because function_call only accepts Option<String>

Code snippets

pub struct ChatCompletionRequest {
    // ... 
    #[serde(skip_serializing_if = "Option::is_none")]
    pub function_call: Option<String>, // can this be an enum of "auto" or { "name": "function-name" } instead 
}
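One way to model this with serde's untagged representation (a sketch; the enum and variant names are assumptions):

use serde::Serialize;

#[derive(Debug, Serialize)]
#[serde(untagged)]
pub enum FunctionCall {
    Mode(String),           // serializes to a bare string: "auto" / "none"
    Named { name: String }, // serializes to {"name":"function-name"}
}

fn main() -> serde_json::Result<()> {
    println!("{}", serde_json::to_string(&FunctionCall::Mode("auto".into()))?);
    println!("{}", serde_json::to_string(&FunctionCall::Named { name: "get_weather".into() })?);
    Ok(())
}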

OS

macOS

Rust version

1.72.0

Library version

openai-api-rs v1.0.1

Support variable length embeddings

Describe the feature or improvement you're requesting

From the blog on new embedding models:

Both of our new embedding models were trained with a technique that allows developers to trade-off performance and cost of using embeddings. Specifically, developers can shorten embeddings (i.e. remove some numbers from the end of the sequence) without the embedding losing its concept-representing properties by passing in the dimensions API parameter. For example, on the MTEB benchmark, a text-embedding-3-large embedding can be shortened to a size of 256 while still outperforming an unshortened text-embedding-ada-002 embedding with a size of 1536.

Based on the API reference, this is available via a new parameter on the embedding request called dimensions (optional integer).
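A sketch of the request struct with the new parameter added (field placement is an assumption):

use serde::Serialize;

#[derive(Debug, Serialize)]
pub struct EmbeddingRequest {
    pub model: String,
    pub input: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub dimensions: Option<i32>, // new: ask the server to shorten the embedding
    #[serde(skip_serializing_if = "Option::is_none")]
    pub user: Option<String>,
}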

Additional context

I'm happy to write a PR for this.

Include Doc rs

Describe the feature or improvement you're requesting

Maybe include the docs.rs link in the readme to point towards the docs?
@dongri

Additional context

No response

All objects should Serialize and Deserialize

Describe the feature or improvement you're requesting

All objects in the API should support Serialization and Deserialization.

Additional context

Example:

openai_api_rs::v1::chat_completion::ChatCompletionRequest supports serialization, but not deserialization.

One of my goals with this project is to implement mock services, which requires my service handlers to deserialize the ChatCompletionRequest object and issue dummy responses.

The current design does not allow for this.

@dongri - big thanks for the quick turn around on the other issue.

Fields of `Tool` are private

Describe the bug

Recently, the functions field was marked as deprecated and we are told to use tools instead.

#54

However, I am not able to construct an instance of Tool because its fields are private, and there is neither a Tool::new() function nor a ToolBuilder to use.

To Reproduce

Try to make an instance of Tool from a different crate using this crate.

Code snippets

let req = ChatCompletionRequest {
    // ...
    tools: Some(vec![Tool {
        tool_type: "function".into(),
        function: Function {
            // ...
        },
    }]),
    tool_choice: Some(ToolChoiceType::ToolChoice {
        tool: Tool {
            tool_type: "function".into(),
            function: Function {
                // ...
            },
        },
    }),
};

Can't do this because the fields in Tool are private.
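The fix the issue implies is simply making the fields public, roughly (a sketch, not the crate's actual definition):

pub struct Tool {
    pub tool_type: String,
    pub function: Function,
}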

OS

Any

Rust version

cargo 1.75.0

Library version

openai-api-rs v2.1.7

Support api-key header

Describe the feature or improvement you're requesting

I'm working with an Azure OpenAI endpoint that doesn't accept authentication through a Bearer token; I need to pass an api-key header instead. I don't know if there's any harm in always sending the additional api-key header, or whether it should be a different method signature.
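For illustration, the Azure-style request with reqwest (Azure OpenAI documents the api-key header; the api-version value is only an example):

fn list_models(endpoint: &str, api_key: &str) -> Result<(), Box<dyn std::error::Error>> {
    let resp = reqwest::blocking::Client::new()
        .get(format!("{endpoint}/models?api-version=2023-05-15"))
        .header("api-key", api_key) // instead of `Authorization: Bearer ...`
        .send()?;
    println!("{}", resp.status());
    Ok(())
}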

Additional context

No response

content-type header api error on file_upload function

Describe the bug

I am trying to upload a file using the file_upload function, but I get the error below:

APIError { message: "415: {\n  \"error\": {\n    \"message\": \"Invalid Content-Type header (application/json; charset=UTF-8), expected multipart/form-data. (HINT: If you're using curl, you can pass -H 'Content-Type: multipart/form-data')\",\n    \"type\": \"invalid_request_error\",\n    \"param\": null,\n    \"code\": null\n  }\n}\n" }

To Reproduce

Try to do a file upload using the file_upload function; it returns an API error.

I think, as the error says, we need to send the correct Content-Type header for file uploads; it can't be the common application/json header.

Code snippets

client.file_upload(FileUploadRequest::new(
    FILE_PATH.to_string(),
    "assistants".to_string(),
))
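For reference, a multipart upload with reqwest's blocking client (requires reqwest's "blocking" and "multipart" features; the crate itself may use a different HTTP client internally):

fn upload(api_key: &str, path: &str) -> Result<(), Box<dyn std::error::Error>> {
    // Build a multipart/form-data body; reqwest sets the boundary header itself.
    let form = reqwest::blocking::multipart::Form::new()
        .text("purpose", "assistants")
        .file("file", path)?;
    let resp = reqwest::blocking::Client::new()
        .post("https://api.openai.com/v1/files")
        .bearer_auth(api_key)
        .multipart(form)
        .send()?;
    println!("{}", resp.status());
    Ok(())
}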

OS

ubuntu

Rust version

Rust v1.73

Library version

openai-api-rs v2.1.4
