
OpenAI PHP

License: MIT License


OpenAI PHP is a community-maintained PHP API client that allows you to interact with the OpenAI API. If you or your business relies on this package, consider supporting the developers who have contributed their time and effort to create and maintain this valuable tool.

Looking for Assistants v2 support?

Check out the 0.10.x release (beta)

Table of Contents

Get Started

Requires PHP 8.1+

First, install OpenAI via the Composer package manager:

composer require openai-php/client

Ensure that the php-http/discovery Composer plugin is allowed to run, or install a PSR-18 client manually if your project does not already have one integrated.

composer require guzzlehttp/guzzle

Then, interact with OpenAI's API:

$yourApiKey = getenv('YOUR_API_KEY');
$client = OpenAI::client($yourApiKey);

$result = $client->chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

echo $result->choices[0]->message->content; // Hello! How can I assist you today?

If necessary, you can configure and create a separate client.

$yourApiKey = getenv('YOUR_API_KEY');

$client = OpenAI::factory()
    ->withApiKey($yourApiKey)
    ->withOrganization('your-organization') // default: null
    ->withBaseUri('openai.example.com/v1') // default: api.openai.com/v1
    ->withHttpClient($client = new \GuzzleHttp\Client([])) // default: HTTP client found using PSR-18 HTTP Client Discovery
    ->withHttpHeader('X-My-Header', 'foo')
    ->withQueryParam('my-param', 'bar')
    ->withStreamHandler(fn (RequestInterface $request): ResponseInterface => $client->send($request, [
        'stream' => true // Provides a custom stream handler for the HTTP client.
    ]))
    ->make();

Usage

Models Resource

list

Lists the currently available models, and provides basic information about each one such as the owner and availability.

$response = $client->models()->list();

$response->object; // 'list'

foreach ($response->data as $result) {
    $result->id; // 'gpt-3.5-turbo-instruct'
    $result->object; // 'model'
    // ...
}

$response->toArray(); // ['object' => 'list', 'data' => [...]]

retrieve

Retrieves a model instance, providing basic information about the model such as the owner and permissioning.

$response = $client->models()->retrieve('gpt-3.5-turbo-instruct');

$response->id; // 'gpt-3.5-turbo-instruct'
$response->object; // 'model'
$response->created; // 1642018370
$response->ownedBy; // 'openai'

$response->toArray(); // ['id' => 'gpt-3.5-turbo-instruct', ...]

delete

Delete a fine-tuned model.

$response = $client->models()->delete('curie:ft-acmeco-2021-03-03-21-44-20');

$response->id; // 'curie:ft-acmeco-2021-03-03-21-44-20'
$response->object; // 'model'
$response->deleted; // true

$response->toArray(); // ['id' => 'curie:ft-acmeco-2021-03-03-21-44-20', ...]

Completions Resource

create

Creates a completion for the provided prompt and parameters.

$response = $client->completions()->create([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'Say this is a test',
    'max_tokens' => 6,
    'temperature' => 0
]);

$response->id; // 'cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7'
$response->object; // 'text_completion'
$response->created; // 1589478378
$response->model; // 'gpt-3.5-turbo-instruct'

foreach ($response->choices as $result) {
    $result->text; // '\n\nThis is a test'
    $result->index; // 0
    $result->logprobs; // null
    $result->finishReason; // 'length' or null
}

$response->usage->promptTokens; // 5
$response->usage->completionTokens; // 6
$response->usage->totalTokens; // 11

$response->toArray(); // ['id' => 'cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7', ...]

create streamed

Creates a streamed completion for the provided prompt and parameters.

$stream = $client->completions()->createStreamed([
        'model' => 'gpt-3.5-turbo-instruct',
        'prompt' => 'Hi',
        'max_tokens' => 10,
    ]);

foreach($stream as $response){
    $response->choices[0]->text;
}
// 1. iteration => 'I'
// 2. iteration => ' am'
// 3. iteration => ' very'
// 4. iteration => ' excited'
// ...

Chat Resource

create

Creates a completion for the chat message.

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

$response->id; // 'chatcmpl-6pMyfj1HF4QXnfvjtfzvufZSQq6Eq'
$response->object; // 'chat.completion'
$response->created; // 1677701073
$response->model; // 'gpt-3.5-turbo-0301'

foreach ($response->choices as $result) {
    $result->index; // 0
    $result->message->role; // 'assistant'
    $result->message->content; // '\n\nHello there! How can I assist you today?'
    $result->finishReason; // 'stop'
}

$response->usage->promptTokens; // 9
$response->usage->completionTokens; // 12
$response->usage->totalTokens; // 21

$response->toArray(); // ['id' => 'chatcmpl-6pMyfj1HF4QXnfvjtfzvufZSQq6Eq', ...]

Creates a completion for the chat message with a tool call.

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo-0613',
    'messages' => [
        ['role' => 'user', 'content' => 'What\'s the weather like in Boston?'],
    ],
    'tools' => [
        [
            'type' => 'function',
            'function' => [
                'name' => 'get_current_weather',
                'description' => 'Get the current weather in a given location',
                'parameters' => [
                    'type' => 'object',
                    'properties' => [
                        'location' => [
                            'type' => 'string',
                            'description' => 'The city and state, e.g. San Francisco, CA',
                        ],
                        'unit' => [
                            'type' => 'string',
                            'enum' => ['celsius', 'fahrenheit']
                        ],
                    ],
                    'required' => ['location'],
                ],
            ],
        ]
    ]
]);

$response->id; // 'chatcmpl-6pMyfj1HF4QXnfvjtfzvufZSQq6Eq'
$response->object; // 'chat.completion'
$response->created; // 1677701073
$response->model; // 'gpt-3.5-turbo-0613'

foreach ($response->choices as $result) {
    $result->index; // 0
    $result->message->role; // 'assistant'
    $result->message->content; // null
    $result->message->toolCalls[0]->id; // 'call_123'
    $result->message->toolCalls[0]->type; // 'function'
    $result->message->toolCalls[0]->function->name; // 'get_current_weather'
    $result->message->toolCalls[0]->function->arguments; // "{\n  \"location\": \"Boston, MA\"\n}"
    $result->finishReason; // 'tool_calls'
}

$response->usage->promptTokens; // 82
$response->usage->completionTokens; // 18
$response->usage->totalTokens; // 100
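From here, a typical follow-up decodes the tool call's JSON arguments, executes the tool, and sends its output back as a 'tool' message so the model can finish the answer. A sketch, where lookUpWeather() stands in for a hypothetical application function:

```php
$toolCall = $response->choices[0]->message->toolCalls[0];

// Decode the model-supplied JSON arguments.
$arguments = json_decode($toolCall->function->arguments, true);

// lookUpWeather() is a hypothetical application function.
$weather = lookUpWeather($arguments['location'], $arguments['unit'] ?? 'celsius');

// Echo the assistant's tool call back, then append the tool result.
$followUp = $client->chat()->create([
    'model' => 'gpt-3.5-turbo-0613',
    'messages' => [
        ['role' => 'user', 'content' => 'What\'s the weather like in Boston?'],
        [
            'role' => 'assistant',
            'content' => null,
            'tool_calls' => [[
                'id' => $toolCall->id,
                'type' => 'function',
                'function' => [
                    'name' => $toolCall->function->name,
                    'arguments' => $toolCall->function->arguments,
                ],
            ]],
        ],
        [
            'role' => 'tool',
            'tool_call_id' => $toolCall->id,
            'content' => json_encode($weather),
        ],
    ],
]);

echo $followUp->choices[0]->message->content;
```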

Creates a completion for the chat message with a function call.

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo-0613',
    'messages' => [
        ['role' => 'user', 'content' => 'What\'s the weather like in Boston?'],
    ],
    'functions' => [
        [
            'name' => 'get_current_weather',
            'description' => 'Get the current weather in a given location',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => [
                        'type' => 'string',
                        'description' => 'The city and state, e.g. San Francisco, CA',
                    ],
                    'unit' => [
                        'type' => 'string',
                        'enum' => ['celsius', 'fahrenheit']
                    ],
                ],
                'required' => ['location'],
            ],
        ]
    ]
]);

$response->id; // 'chatcmpl-6pMyfj1HF4QXnfvjtfzvufZSQq6Eq'
$response->object; // 'chat.completion'
$response->created; // 1677701073
$response->model; // 'gpt-3.5-turbo-0613'

foreach ($response->choices as $result) {
    $result->index; // 0
    $result->message->role; // 'assistant'
    $result->message->content; // null
    $result->message->functionCall->name; // 'get_current_weather'
    $result->message->functionCall->arguments; // "{\n  \"location\": \"Boston, MA\"\n}"
    $result->finishReason; // 'function_call'
}

$response->usage->promptTokens; // 82
$response->usage->completionTokens; // 18
$response->usage->totalTokens; // 100

create streamed

Creates a streamed completion for the chat message.

$stream = $client->chat()->createStreamed([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

foreach($stream as $response){
    $response->choices[0]->toArray();
}
// 1. iteration => ['index' => 0, 'delta' => ['role' => 'assistant'], 'finish_reason' => null]
// 2. iteration => ['index' => 0, 'delta' => ['content' => 'Hello'], 'finish_reason' => null]
// 3. iteration => ['index' => 0, 'delta' => ['content' => '!'], 'finish_reason' => null]
// ...
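The streamed chunks can be concatenated into the full reply by reading each delta's content. A minimal sketch (the first chunk typically carries only the role, so its content is null):

```php
$stream = $client->chat()->createStreamed([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

$content = '';

foreach ($stream as $response) {
    // Append each incremental content delta; role-only chunks contribute nothing.
    $content .= $response->choices[0]->delta->content ?? '';
}

echo $content; // the assembled assistant reply
```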

Audio Resource

speech

Generates audio from the input text.

$client->audio()->speech([
    'model' => 'tts-1',
    'input' => 'The quick brown fox jumped over the lazy dog.',
    'voice' => 'alloy',
]); // audio file content as string
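Because the return value is the raw audio bytes, it can be written straight to disk, for example:

```php
$audio = $client->audio()->speech([
    'model' => 'tts-1',
    'input' => 'The quick brown fox jumped over the lazy dog.',
    'voice' => 'alloy',
]);

// Persist the binary audio content to a local file.
file_put_contents('speech.mp3', $audio);
```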

speechStreamed

Generates streamed audio from the input text.

$stream = $client->audio()->speechStreamed([
    'model' => 'tts-1',
    'input' => 'The quick brown fox jumped over the lazy dog.',
    'voice' => 'alloy',
]);

foreach($stream as $chunk){
    $chunk; // chunk of audio file content as string
}
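For long inputs, the chunks can be appended to a file as they arrive instead of buffering the whole response in memory, for example:

```php
$stream = $client->audio()->speechStreamed([
    'model' => 'tts-1',
    'input' => 'The quick brown fox jumped over the lazy dog.',
    'voice' => 'alloy',
]);

$handle = fopen('speech.mp3', 'wb');

foreach ($stream as $chunk) {
    // Write each audio chunk as soon as it arrives.
    fwrite($handle, $chunk);
}

fclose($handle);
```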

transcribe

Transcribes audio into the input language.

$response = $client->audio()->transcribe([
    'model' => 'whisper-1',
    'file' => fopen('audio.mp3', 'r'),
    'response_format' => 'verbose_json',
    'timestamp_granularities' => ['segment', 'word']
]);

$response->task; // 'transcribe'
$response->language; // 'english'
$response->duration; // 2.95
$response->text; // 'Hello, how are you?'

foreach ($response->segments as $segment) {
    $segment->index; // 0
    $segment->seek; // 0
    $segment->start; // 0.0
    $segment->end; // 4.0
    $segment->text; // 'Hello, how are you?'
    $segment->tokens; // [50364, 2425, 11, 577, 366, 291, 30, 50564]
    $segment->temperature; // 0.0
    $segment->avgLogprob; // -0.45045216878255206
    $segment->compressionRatio; // 0.7037037037037037
    $segment->noSpeechProb; // 0.1076972484588623
    $segment->transient; // false
}

foreach ($response->words as $word) {
    $word->word; // 'Hello'
    $word->start; // 0.31
    $word->end; // 0.92
}

$response->toArray(); // ['task' => 'transcribe', ...]

translate

Translates audio into English.

$response = $client->audio()->translate([
    'model' => 'whisper-1',
    'file' => fopen('german.mp3', 'r'),
    'response_format' => 'verbose_json',
]);

$response->task; // 'translate'
$response->language; // 'english'
$response->duration; // 2.95
$response->text; // 'Hello, how are you?'

foreach ($response->segments as $segment) {
    $segment->index; // 0
    $segment->seek; // 0
    $segment->start; // 0.0
    $segment->end; // 4.0
    $segment->text; // 'Hello, how are you?'
    $segment->tokens; // [50364, 2425, 11, 577, 366, 291, 30, 50564]
    $segment->temperature; // 0.0
    $segment->avgLogprob; // -0.45045216878255206
    $segment->compressionRatio; // 0.7037037037037037
    $segment->noSpeechProb; // 0.1076972484588623
    $segment->transient; // false
}

$response->toArray(); // ['task' => 'translate', ...]

Embeddings Resource

create

Creates an embedding vector representing the input text.

$response = $client->embeddings()->create([
    'model' => 'text-similarity-babbage-001',
    'input' => 'The food was delicious and the waiter...',
]);

$response->object; // 'list'

foreach ($response->embeddings as $embedding) {
    $embedding->object; // 'embedding'
    $embedding->embedding; // [0.018990106880664825, -0.0073809814639389515, ...]
    $embedding->index; // 0
}

$response->usage->promptTokens; // 8
$response->usage->totalTokens; // 8

$response->toArray(); // ['data' => [...], ...]
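Embedding vectors are usually compared by cosine similarity. The client does not ship a helper for this, but a plain-PHP sketch looks like:

```php
/**
 * Cosine similarity between two equal-length embedding vectors.
 * Returns 1.0 for identical directions and 0.0 for orthogonal vectors.
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// e.g. cosineSimilarity($response->embeddings[0]->embedding, $otherVector);
```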

Files Resource

list

Returns a list of files that belong to the user's organization.

$response = $client->files()->list();

$response->object; // 'list'

foreach ($response->data as $result) {
    $result->id; // 'file-XjGxS3KTG0uNmNOK362iJua3'
    $result->object; // 'file'
    // ...
}

$response->toArray(); // ['object' => 'list', 'data' => [...]]

delete

Delete a file.

$response = $client->files()->delete($file);

$response->id; // 'file-XjGxS3KTG0uNmNOK362iJua3'
$response->object; // 'file'
$response->deleted; // true

$response->toArray(); // ['id' => 'file-XjGxS3KTG0uNmNOK362iJua3', ...]

retrieve

Returns information about a specific file.

$response = $client->files()->retrieve('file-XjGxS3KTG0uNmNOK362iJua3');

$response->id; // 'file-XjGxS3KTG0uNmNOK362iJua3'
$response->object; // 'file'
$response->bytes; // 140
$response->createdAt; // 1613779657
$response->filename; // 'mydata.jsonl'
$response->purpose; // 'fine-tune'
$response->status; // 'succeeded'
$response->status_details; // null

$response->toArray(); // ['id' => 'file-XjGxS3KTG0uNmNOK362iJua3', ...]

upload

Upload a file that contains document(s) to be used across various endpoints/features.

$response = $client->files()->upload([
        'purpose' => 'fine-tune',
        'file' => fopen('my-file.jsonl', 'r'),
    ]);

$response->id; // 'file-XjGxS3KTG0uNmNOK362iJua3'
$response->object; // 'file'
$response->bytes; // 140
$response->createdAt; // 1613779657
$response->filename; // 'mydata.jsonl'
$response->purpose; // 'fine-tune'
$response->status; // 'succeeded'
$response->status_details; // null

$response->toArray(); // ['id' => 'file-XjGxS3KTG0uNmNOK362iJua3', ...]

download

Returns the contents of the specified file.

$client->files()->download($file); // '{"prompt": "<prompt text>", ...'

FineTuning Resource

create job

Creates a job that fine-tunes a specified model from a given dataset.

$response = $client->fineTuning()->createJob([
    'training_file' => 'file-abc123',
    'validation_file' => null,
    'model' => 'gpt-3.5-turbo',
    'hyperparameters' => [
        'n_epochs' => 4,
    ],
    'suffix' => null,
]);

$response->id; // 'ftjob-AF1WoRqd3aJAHsqc9NY7iL8F'
$response->object; // 'fine_tuning.job'
$response->model; // 'gpt-3.5-turbo-0613'
$response->fineTunedModel; // null
// ...

$response->toArray(); // ['id' => 'ftjob-AF1WoRqd3aJAHsqc9NY7iL8F', ...]

list jobs

List your organization's fine-tuning jobs.

$response = $client->fineTuning()->listJobs();

$response->object; // 'list'

foreach ($response->data as $result) {
    $result->id; // 'ftjob-AF1WoRqd3aJAHsqc9NY7iL8F'
    $result->object; // 'fine_tuning.job'
    // ...
}

$response->toArray(); // ['object' => 'list', 'data' => [...]]

You can pass additional parameters to the listJobs method to narrow down the results.

$response = $client->fineTuning()->listJobs([
    'limit' => 3, // Number of jobs to retrieve (Default: 20)
    'after' => 'ftjob-AF1WoRqd3aJAHsqc9NY7iL8F', // Identifier for the last job from the previous pagination request.
]);
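The cursor can be used to page through the complete job list. A sketch that keeps requesting pages until a short page comes back:

```php
$jobs = [];
$after = null;

do {
    // array_filter() drops the null cursor on the first request.
    $page = $client->fineTuning()->listJobs(array_filter([
        'limit' => 20,
        'after' => $after,
    ]));

    foreach ($page->data as $job) {
        $jobs[] = $job;
        $after = $job->id; // cursor for the next request
    }
} while (count($page->data) === 20);
```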

retrieve job

Get info about a fine-tuning job.

$response = $client->fineTuning()->retrieveJob('ftjob-AF1WoRqd3aJAHsqc9NY7iL8F');

$response->id; // 'ftjob-AF1WoRqd3aJAHsqc9NY7iL8F'
$response->object; // 'fine_tuning.job'
$response->model; // 'gpt-3.5-turbo-0613'
$response->createdAt; // 1614807352
$response->finishedAt; // 1692819450
$response->fineTunedModel; // 'ft:gpt-3.5-turbo-0613:jwe-dev::7qnxQ0sQ'
$response->organizationId; // 'org-jwe45798ASN82s'
$response->resultFiles[0]; // 'file-1bl05WrhsKDDEdg8XSP617QF'
$response->status; // 'succeeded'
$response->validationFile; // null
$response->trainingFile; // 'file-abc123'
$response->trainedTokens; // 5049

$response->hyperparameters->nEpochs; // 9

$response->toArray(); // ['id' => 'ftjob-AF1WoRqd3aJAHsqc9NY7iL8F', ...]

cancel job

Immediately cancel a fine-tune job.

$response = $client->fineTuning()->cancelJob('ftjob-AF1WoRqd3aJAHsqc9NY7iL8F');

$response->id; // 'ftjob-AF1WoRqd3aJAHsqc9NY7iL8F'
$response->object; // 'fine_tuning.job'
// ...
$response->status; // 'cancelled'
// ...

$response->toArray(); // ['id' => 'ftjob-AF1WoRqd3aJAHsqc9NY7iL8F', ...]

list job events

Get status updates for a fine-tuning job.

$response = $client->fineTuning()->listJobEvents('ftjob-AF1WoRqd3aJAHsqc9NY7iL8F');

$response->object; // 'list'

foreach ($response->data as $result) {
    $result->object; // 'fine_tuning.job.event' 
    $result->createdAt; // 1614807352
    // ...
}

$response->toArray(); // ['object' => 'list', 'data' => [...]]

You can pass additional parameters to the listJobEvents method to narrow down the results.

$response = $client->fineTuning()->listJobEvents('ftjob-AF1WoRqd3aJAHsqc9NY7iL8F', [
    'limit' => 3, // Number of events to retrieve (Default: 20)
    'after' => 'ftevent-kLPSMIcsqshEUEJVOVBVcHlP', // Identifier for the last event from the previous pagination request.
]);

FineTunes Resource (deprecated)

create

Creates a job that fine-tunes a specified model from a given dataset.

$response = $client->fineTunes()->create([
    'training_file' => 'file-ajSREls59WBbvgSzJSVWxMCB',
    'validation_file' => 'file-XjSREls59WBbvgSzJSVWxMCa',
    'model' => 'curie',
    'n_epochs' => 4,
    'batch_size' => null,
    'learning_rate_multiplier' => null,
    'prompt_loss_weight' => 0.01,
    'compute_classification_metrics' => false,
    'classification_n_classes' => null,
    'classification_positive_class' => null,
    'classification_betas' => [],
    'suffix' => null,
]);

$response->id; // 'ft-AF1WoRqd3aJAHsqc9NY7iL8F'
$response->object; // 'fine-tune'
// ...

$response->toArray(); // ['id' => 'ft-AF1WoRqd3aJAHsqc9NY7iL8F', ...]

list

List your organization's fine-tuning jobs.

$response = $client->fineTunes()->list();

$response->object; // 'list'

foreach ($response->data as $result) {
    $result->id; // 'ft-AF1WoRqd3aJAHsqc9NY7iL8F'
    $result->object; // 'fine-tune'
    // ...
}

$response->toArray(); // ['object' => 'list', 'data' => [...]]

retrieve

Gets info about the fine-tune job.

$response = $client->fineTunes()->retrieve('ft-AF1WoRqd3aJAHsqc9NY7iL8F');

$response->id; // 'ft-AF1WoRqd3aJAHsqc9NY7iL8F'
$response->object; // 'fine-tune'
$response->model; // 'curie'
$response->createdAt; // 1614807352
$response->fineTunedModel; // 'curie:ft-acmeco-2021-03-03-21-44-20'
$response->organizationId; // 'org-jwe45798ASN82s'
$response->resultFiles; // [...]
$response->status; // 'succeeded'
$response->validationFiles; // [...]
$response->trainingFiles; // [...]
$response->updatedAt; // 1614807865

foreach ($response->events as $result) {
    $result->object; // 'fine-tune-event' 
    $result->createdAt; // 1614807352
    $result->level; // 'info'
    $result->message; // 'Job enqueued. Waiting for jobs ahead to complete. Queue number: 0.'
}

$response->hyperparams->batchSize; // 4 
$response->hyperparams->learningRateMultiplier; // 0.1 
$response->hyperparams->nEpochs; // 4 
$response->hyperparams->promptLossWeight; // 0.1

foreach ($response->resultFiles as $result) {
    $result->id; // 'file-XjGxS3KTG0uNmNOK362iJua3'
    $result->object; // 'file'
    $result->bytes; // 140
    $result->createdAt; // 1613779657
    $result->filename; // 'mydata.jsonl'
    $result->purpose; // 'fine-tune'
    $result->status; // 'succeeded'
    $result->status_details; // null
}

foreach ($response->validationFiles as $result) {
    $result->id; // 'file-XjGxS3KTG0uNmNOK362iJua3'
    // ...
}

foreach ($response->trainingFiles as $result) {
    $result->id; // 'file-XjGxS3KTG0uNmNOK362iJua3'
    // ...
}

$response->toArray(); // ['id' => 'ft-AF1WoRqd3aJAHsqc9NY7iL8F', ...]

cancel

Immediately cancel a fine-tune job.

$response = $client->fineTunes()->cancel('ft-AF1WoRqd3aJAHsqc9NY7iL8F');

$response->id; // 'ft-AF1WoRqd3aJAHsqc9NY7iL8F'
$response->object; // 'fine-tune'
// ...
$response->status; // 'cancelled'
// ...

$response->toArray(); // ['id' => 'ft-AF1WoRqd3aJAHsqc9NY7iL8F', ...]

list events

Get fine-grained status updates for a fine-tune job.

$response = $client->fineTunes()->listEvents('ft-AF1WoRqd3aJAHsqc9NY7iL8F');

$response->object; // 'list'

foreach ($response->data as $result) {
    $result->object; // 'fine-tune-event' 
    $result->createdAt; // 1614807352
    // ...
}

$response->toArray(); // ['object' => 'list', 'data' => [...]]

list events streamed

Get streamed fine-grained status updates for a fine-tune job.

$stream = $client->fineTunes()->listEventsStreamed('ft-y3OpNlc8B5qBVGCCVsLZsDST');

foreach($stream as $response){
    $response->message;
}
// 1. iteration => 'Created fine-tune: ft-y3OpNlc8B5qBVGCCVsLZsDST'
// 2. iteration => 'Fine-tune costs $0.00'
// ...
// xx. iteration => 'Uploaded result file: file-ajLKUCMsFPrT633zqwr0eI4l'
// xx. iteration => 'Fine-tune succeeded'

Moderations Resource

create

Classifies if text violates OpenAI's Content Policy.

$response = $client->moderations()->create([
    'model' => 'text-moderation-latest',
    'input' => 'I want to k*** them.',
]);

$response->id; // modr-5xOyuS
$response->model; // text-moderation-003

foreach ($response->results as $result) {
    $result->flagged; // true

    foreach ($result->categories as $category) {
        $category->category->value; // 'violence'
        $category->violated; // true
        $category->score; // 0.97431367635727
    }
}

$response->toArray(); // ['id' => 'modr-5xOyuS', ...]

Images Resource

create

Creates an image given a prompt.

$response = $client->images()->create([
    'model' => 'dall-e-3',
    'prompt' => 'A cute baby sea otter',
    'n' => 1,
    'size' => '1024x1024',
    'response_format' => 'url',
]);

$response->created; // 1589478378

foreach ($response->data as $data) {
    $data->url; // 'https://oaidalleapiprodscus.blob.core.windows.net/private/...'
    $data->b64_json; // null
}

$response->toArray(); // ['created' => 1589478378, 'data' => ['url' => 'https://oaidalleapiprodscus...', ...]]

edit

Creates an edited or extended image given an original image and a prompt.

$response = $client->images()->edit([
    'image' => fopen('image_edit_original.png', 'r'),
    'mask' => fopen('image_edit_mask.png', 'r'),
    'prompt' => 'A sunlit indoor lounge area with a pool containing a flamingo',
    'n' => 1,
    'size' => '256x256',
    'response_format' => 'url',
]);

$response->created; // 1589478378

foreach ($response->data as $data) {
    $data->url; // 'https://oaidalleapiprodscus.blob.core.windows.net/private/...'
    $data->b64_json; // null
}

$response->toArray(); // ['created' => 1589478378, 'data' => ['url' => 'https://oaidalleapiprodscus...', ...]]

variation

Creates a variation of a given image.

$response = $client->images()->variation([
    'image' => fopen('image_edit_original.png', 'r'),
    'n' => 1,
    'size' => '256x256',
    'response_format' => 'url',
]);

$response->created; // 1589478378

foreach ($response->data as $data) {
    $data->url; // 'https://oaidalleapiprodscus.blob.core.windows.net/private/...'
    $data->b64_json; // null
}

$response->toArray(); // ['created' => 1589478378, 'data' => ['url' => 'https://oaidalleapiprodscus...', ...]]

Assistants Resource

Note: If you are creating the client manually using the factory, make sure you provide the necessary header:

$factory->withHttpHeader('OpenAI-Beta', 'assistants=v1')

create

Create an assistant with a model and instructions.

$response = $client->assistants()->create([
    'instructions' => 'You are a personal math tutor. When asked a question, write and run Python code to answer the question.',
    'name' => 'Math Tutor',
    'tools' => [
        [
            'type' => 'code_interpreter',
        ],
    ],
    'model' => 'gpt-4',
]);

$response->id; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->object; // 'assistant'
$response->createdAt; // 1623936000
$response->name; // 'Math Tutor'
$response->instructions; // 'You are a personal math tutor. When asked a question, write and run Python code to answer the question.'
$response->model; // 'gpt-4'
$response->description; // null
$response->tools[0]->type; // 'code_interpreter'
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd', ...]

retrieve

Retrieves an assistant.

$response = $client->assistants()->retrieve('asst_gxzBkD1wkKEloYqZ410pT5pd');

$response->id; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->object; // 'assistant'
$response->createdAt; // 1623936000
$response->name; // 'Math Tutor'
$response->instructions; // 'You are a personal math tutor. When asked a question, write and run Python code to answer the question.'
$response->model; // 'gpt-4'
$response->description; // null
$response->tools[0]->type; // 'code_interpreter'
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd', ...]

modify

Modifies an assistant.

$response = $client->assistants()->modify('asst_gxzBkD1wkKEloYqZ410pT5pd', [
        'name' => 'New Math Tutor',
    ]);

$response->id; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->object; // 'assistant'
$response->createdAt; // 1623936000
$response->name; // 'New Math Tutor'
$response->instructions; // 'You are a personal math tutor. When asked a question, write and run Python code to answer the question.'
$response->model; // 'gpt-4'
$response->description; // null
$response->tools[0]->type; // 'code_interpreter'
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd', ...]

delete

Delete an assistant.

$response = $client->assistants()->delete('asst_gxzBkD1wkKEloYqZ410pT5pd');

$response->id; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->object; // 'assistant.deleted'
$response->deleted; // true

$response->toArray(); // ['id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd', ...]

list

Returns a list of assistants.

$response = $client->assistants()->list([
    'limit' => 10,
]);

$response->object; // 'list'
$response->firstId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->lastId; // 'asst_reHHtAM0jKLDIxanM6gP6DaR'
$response->hasMore; // true

foreach ($response->data as $result) {
    $result->id; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
    // ...
}

$response->toArray(); // ['object' => 'list', ...]

Assistants Files Resource

create

Create an assistant file by attaching a file to an assistant.

$response = $client->assistants()->files()->create('asst_gxzBkD1wkKEloYqZ410pT5pd', [
    'file_id' => 'file-wB6RM6wHdA49HfS2DJ9fEyrH',
]);

$response->id; // 'file-wB6RM6wHdA49HfS2DJ9fEyrH'
$response->object; // 'assistant.file'
$response->createdAt; // 1623936000
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'

$response->toArray(); // ['id' => 'file-wB6RM6wHdA49HfS2DJ9fEyrH', ...]

retrieve

Retrieves an AssistantFile.

$response = $client->assistants()->files()->retrieve(
    assistantId: 'asst_gxzBkD1wkKEloYqZ410pT5pd', 
    fileId: 'file-wB6RM6wHdA49HfS2DJ9fEyrH'
);

$response->id; // 'file-wB6RM6wHdA49HfS2DJ9fEyrH'
$response->object; // 'assistant.file'
$response->createdAt; // 1623936000
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'

$response->toArray(); // ['id' => 'file-wB6RM6wHdA49HfS2DJ9fEyrH', ...]

delete

Delete an assistant file.

$response = $client->assistants()->files()->delete(
    assistantId: 'asst_gxzBkD1wkKEloYqZ410pT5pd', 
    fileId: 'file-wB6RM6wHdA49HfS2DJ9fEyrH'
);

$response->id; // 'file-wB6RM6wHdA49HfS2DJ9fEyrH'
$response->object; // 'assistant.file.deleted'
$response->deleted; // true

$response->toArray(); // ['id' => 'file-wB6RM6wHdA49HfS2DJ9fEyrH', ...]

list

Returns a list of assistant files.

$response = $client->assistants()->files()->list('asst_gxzBkD1wkKEloYqZ410pT5pd', [
    'limit' => 2,
]);

$response->object; // 'list'
$response->firstId; // 'file-wB6RM6wHdA49HfS2DJ9fEyrH'
$response->lastId; // 'file-6EsV79Y261TEmi0PY5iHbZdS'
$response->hasMore; // true

foreach ($response->data as $result) {
    $result->id; // 'file-wB6RM6wHdA49HfS2DJ9fEyrH'
    // ...
}

$response->toArray(); // ['object' => 'list', ...]

Threads Resource

create

Create a thread.

$response = $client->threads()->create([]);

$response->id; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->object; // 'thread'
$response->createdAt; // 1623936000
$response->metadata; // []

$response->toArray(); // ['id' => 'thread_tKFLqzRN9n7MnyKKvc1Q7868', ...]

createAndRun

Create a thread and run it in one request.

$response = $client->threads()->createAndRun(
    [
        'assistant_id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd',
        'thread' => [
            'messages' =>
                [
                    [
                        'role' => 'user',
                        'content' => 'Explain deep learning to a 5 year old.',
                    ],
                ],
        ],
    ],
);

$response->id; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->object; // 'thread.run'
$response->createdAt; // 1623936000
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->status; // 'queued'
$response->startedAt; // null
$response->expiresAt; // 1699622335
$response->cancelledAt; // null
$response->failedAt; // null
$response->completedAt; // null
$response->lastError; // null
$response->model; // 'gpt-4'
$response->instructions; // null
$response->tools; // []
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'run_4RCYyYzX9m41WQicoJtUQAb8', ...]
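A run starts in the 'queued' status, so callers usually poll until it settles. A minimal sketch (assuming the client's threads()->runs() resource; production code should add a timeout):

```php
$run = $client->threads()->createAndRun([
    'assistant_id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd',
    'thread' => [
        'messages' => [
            ['role' => 'user', 'content' => 'Explain deep learning to a 5 year old.'],
        ],
    ],
]);

// Poll until the run leaves its transient states.
while (in_array($run->status, ['queued', 'in_progress'], true)) {
    sleep(1);

    $run = $client->threads()->runs()->retrieve(
        threadId: $run->threadId,
        runId: $run->id,
    );
}

$run->status; // e.g. 'completed', 'requires_action' or 'failed'
```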

retrieve

Retrieves a thread.

$response = $client->threads()->retrieve('thread_tKFLqzRN9n7MnyKKvc1Q7868');

$response->id; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->object; // 'thread'
$response->createdAt; // 1623936000
$response->metadata; // []

$response->toArray(); // ['id' => 'thread_tKFLqzRN9n7MnyKKvc1Q7868', ...]

modify

Modifies a thread.

$response = $client->threads()->modify('thread_tKFLqzRN9n7MnyKKvc1Q7868', [
        'metadata' => [
            'name' => 'My new thread name',
        ],
    ]);

$response->id; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->object; // 'thread'
$response->createdAt; // 1623936000
$response->metadata; // ['name' => 'My new thread name']

$response->toArray(); // ['id' => 'thread_tKFLqzRN9n7MnyKKvc1Q7868', ...]

delete

Delete a thread.

$response = $client->threads()->delete('thread_tKFLqzRN9n7MnyKKvc1Q7868');

$response->id; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->object; // 'thread.deleted'
$response->deleted; // true

$response->toArray(); // ['id' => 'thread_tKFLqzRN9n7MnyKKvc1Q7868', ...]

Threads Messages Resource

create

Create a message.

$response = $client->threads()->messages()->create('thread_tKFLqzRN9n7MnyKKvc1Q7868', [
    'role' => 'user',
    'content' => 'What is the sum of 5 and 7?',
]);

$response->id; // 'msg_SKYwvF3zcigxthfn6F4hnpdU'
$response->object; // 'thread.message'
$response->createdAt; // 1623936000
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->role; // 'user'
$response->content[0]->type; // 'text'
$response->content[0]->text->value; // 'What is the sum of 5 and 7?'
$response->content[0]->text->annotations; // []
$response->assistantId; // null
$response->runId; // null
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'msg_SKYwvF3zcigxthfn6F4hnpdU', ...]

retrieve

Retrieve a message.

$response = $client->threads()->messages()->retrieve(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    messageId: 'msg_SKYwvF3zcigxthfn6F4hnpdU',
);

$response->id; // 'msg_SKYwvF3zcigxthfn6F4hnpdU'
$response->object; // 'thread.message'
$response->createdAt; // 1623936000
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->role; // 'user'
$response->content[0]->type; // 'text'
$response->content[0]->text->value; // 'What is the sum of 5 and 7?'
$response->content[0]->text->annotations; // []
$response->assistantId; // null
$response->runId; // null
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'msg_SKYwvF3zcigxthfn6F4hnpdU', ...]

modify

Modifies a message.

$response = $client->threads()->messages()->modify(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    messageId: 'msg_SKYwvF3zcigxthfn6F4hnpdU',
    parameters: [
        'metadata' => [
            'name' => 'My new message name',
        ],
    ],
);

$response->id; // 'msg_SKYwvF3zcigxthfn6F4hnpdU'
$response->object; // 'thread.message'
$response->createdAt; // 1623936000
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->role; // 'user'
$response->content[0]->type; // 'text'
$response->content[0]->text->value; // 'What is the sum of 5 and 7?'
$response->content[0]->text->annotations; // []
$response->assistantId; // null
$response->runId; // null
$response->fileIds; // []
$response->metadata; // ['name' => 'My new message name']

$response->toArray(); // ['id' => 'msg_SKYwvF3zcigxthfn6F4hnpdU', ...]

list

Returns a list of messages for a given thread.

$response = $client->threads()->messages()->list('thread_tKFLqzRN9n7MnyKKvc1Q7868', [
    'limit' => 10,
]);

$response->object; // 'list'
$response->firstId; // 'msg_SKYwvF3zcigxthfn6F4hnpdU'
$response->lastId; // 'msg_SKYwvF3zcigxthfn6F4hnpdU'
$response->hasMore; // false

foreach ($response->data as $result) {
    $result->id; // 'msg_SKYwvF3zcigxthfn6F4hnpdU'
    // ...
}

$response->toArray(); // ['object' => 'list', ...]

Threads Messages Files Resource

retrieve

Retrieves a message file.

$response = $client->threads()->messages()->files()->retrieve(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    messageId: 'msg_SKYwvF3zcigxthfn6F4hnpdU',
    fileId: 'file-DhxjnFCaSHc4ZELRGKwTMFtI',
);

$response->id; // 'file-DhxjnFCaSHc4ZELRGKwTMFtI'
$response->object; // 'thread.message.file'
$response->createdAt; // 1623936000
$response->messageId; // 'msg_SKYwvF3zcigxthfn6F4hnpdU'

$response->toArray(); // ['id' => 'file-DhxjnFCaSHc4ZELRGKwTMFtI', ...]

list

Returns a list of message files.

$response = $client->threads()->messages()->files()->list(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    messageId: 'msg_SKYwvF3zcigxthfn6F4hnpdU',
    parameters: [
        'limit' => 10,
    ],
);

$response->object; // 'list'
$response->firstId; // 'file-DhxjnFCaSHc4ZELRGKwTMFtI'
$response->lastId; // 'file-DhxjnFCaSHc4ZELRGKwTMFtI'
$response->hasMore; // false

foreach ($response->data as $result) {
    $result->id; // 'file-DhxjnFCaSHc4ZELRGKwTMFtI'
    // ...
}

$response->toArray(); // ['object' => 'list', ...]

Threads Runs Resource

create

Create a run.

$response = $client->threads()->runs()->create(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868', 
    parameters: [
        'assistant_id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd',
    ],
);

$response->id; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->object; // 'thread.run'
$response->createdAt; // 1623936000
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->status; // 'queued'
$response->startedAt; // null
$response->expiresAt; // 1699622335
$response->cancelledAt; // null
$response->failedAt; // null
$response->completedAt; // null
$response->lastError; // null
$response->model; // 'gpt-4'
$response->instructions; // null
$response->tools[0]->type; // 'code_interpreter'
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'run_4RCYyYzX9m41WQicoJtUQAb8', ...]
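Runs execute asynchronously, so after creating one you typically poll retrieve until the run reaches a terminal status and then read the assistant's reply from the thread. A minimal polling sketch — the sleep interval and the status list are illustrative choices, not part of the library:

```php
$run = $client->threads()->runs()->create(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    parameters: [
        'assistant_id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd',
    ],
);

// Poll until the run leaves the non-terminal states.
do {
    sleep(1);
    $run = $client->threads()->runs()->retrieve(
        threadId: $run->threadId,
        runId: $run->id,
    );
} while (in_array($run->status, ['queued', 'in_progress', 'cancelling'], true));

if ($run->status === 'completed') {
    // The assistant's reply is the newest message on the thread.
    $messages = $client->threads()->messages()->list($run->threadId, ['limit' => 1]);
    echo $messages->data[0]->content[0]->text->value;
}
```

For long-running assistants, the streamed variants below avoid polling entirely.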

create streamed

Creates a streamed run.

OpenAI Assistant Events

$stream = $client->threads()->runs()->createStreamed(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    parameters: [
        'assistant_id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd',
    ],
);

foreach ($stream as $response) {
    $response->event; // 'thread.run.created' | 'thread.run.in_progress' | ...
    $response->response; // ThreadResponse | ThreadRunResponse | ThreadRunStepResponse | ThreadRunStepDeltaResponse | ThreadMessageResponse | ThreadMessageDeltaResponse
}

// ...

create streamed with function calls

Creates a streamed run with function calls

OpenAI Assistant Events

$stream = $client->threads()->runs()->createStreamed(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    parameters: [
        'assistant_id' => 'asst_gxzBkD1wkKEloYqZ410pT5pd',
    ],
);


do {
    foreach ($stream as $response) {
        $response->event; // 'thread.run.created' | 'thread.run.in_progress' | ...
        $response->response; // ThreadResponse | ThreadRunResponse | ThreadRunStepResponse | ThreadRunStepDeltaResponse | ThreadMessageResponse | ThreadMessageDeltaResponse

        switch ($response->event) {
            case 'thread.run.created':
            case 'thread.run.queued':
            case 'thread.run.completed':
            case 'thread.run.cancelling':
                $run = $response->response;
                break;
            case 'thread.run.expired':
            case 'thread.run.cancelled':
            case 'thread.run.failed':
                $run = $response->response;
                break 3; // leave the switch, the foreach and the do-while
            case 'thread.run.requires_action':
                // Overwrite the stream with the new stream started by submitting the tool outputs
                $stream = $client->threads()->runs()->submitToolOutputsStreamed(
                    threadId: $run->threadId,
                    runId: $run->id,
                    parameters: [
                        'tool_outputs' => [
                            [
                                'tool_call_id' => 'call_KSg14X7kZF2WDzlPhpQ168Mj',
                                'output' => '12',
                            ],
                        ],
                    ],
                );
                break;
        }
    }
} while ($run->status !== 'completed');

// ...

retrieve

Retrieves a run.

$response = $client->threads()->runs()->retrieve(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    runId: 'run_4RCYyYzX9m41WQicoJtUQAb8',
);

$response->id; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->object; // 'thread.run'
$response->createdAt; // 1623936000
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->status; // 'queued'
$response->startedAt; // null
$response->expiresAt; // 1699622335
$response->cancelledAt; // null
$response->failedAt; // null
$response->completedAt; // null
$response->lastError; // null
$response->model; // 'gpt-4'
$response->instructions; // null
$response->tools[0]->type; // 'code_interpreter'
$response->fileIds; // []
$response->metadata; // []

$response->usage->promptTokens; // 25
$response->usage->completionTokens; // 32
$response->usage->totalTokens; // 57

$response->toArray(); // ['id' => 'run_4RCYyYzX9m41WQicoJtUQAb8', ...]

modify

Modifies a run.

$response = $client->threads()->runs()->modify(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    runId: 'run_4RCYyYzX9m41WQicoJtUQAb8',
    parameters: [
        'metadata' => [
            'name' => 'My new run name',
        ],
    ],
);

$response->id; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->object; // 'thread.run'
$response->createdAt; // 1623936000
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->status; // 'queued'
$response->startedAt; // null
$response->expiresAt; // 1699622335
$response->cancelledAt; // null
$response->failedAt; // null
$response->completedAt; // null
$response->lastError; // null
$response->model; // 'gpt-4'
$response->instructions; // null
$response->tools[0]->type; // 'code_interpreter'
$response->fileIds; // []
$response->metadata; // ['name' => 'My new run name']

$response->toArray(); // ['id' => 'run_4RCYyYzX9m41WQicoJtUQAb8', ...]

cancel

Cancels a run that is in_progress.

$response = $client->threads()->runs()->cancel(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    runId: 'run_4RCYyYzX9m41WQicoJtUQAb8',
);

$response->id; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->object; // 'thread.run'
$response->createdAt; // 1623936000
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->status; // 'cancelling'
$response->startedAt; // null
$response->expiresAt; // 1699622335
$response->cancelledAt; // null
$response->failedAt; // null
$response->completedAt; // null
$response->lastError; // null
$response->model; // 'gpt-4'
$response->instructions; // null
$response->tools[0]->type; // 'code_interpreter'
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'run_4RCYyYzX9m41WQicoJtUQAb8', ...]

submitToolOutputs

When a run has the status: requires_action and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.

$response = $client->threads()->runs()->submitToolOutputs(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    runId: 'run_4RCYyYzX9m41WQicoJtUQAb8',
    parameters: [
        'tool_outputs' => [
            [
                'tool_call_id' => 'call_KSg14X7kZF2WDzlPhpQ168Mj',
                'output' => '12',
            ],
        ],
    ]
);

$response->id; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->object; // 'thread.run'
$response->createdAt; // 1623936000
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->status; // 'in_progress'
$response->startedAt; // null
$response->expiresAt; // 1699622335
$response->cancelledAt; // null
$response->failedAt; // null
$response->completedAt; // null
$response->lastError; // null
$response->model; // 'gpt-4'
$response->instructions; // null
$response->tools[0]->type; // 'function'
$response->fileIds; // []
$response->metadata; // []

$response->toArray(); // ['id' => 'run_4RCYyYzX9m41WQicoJtUQAb8', ...]

list

Returns a list of runs belonging to a thread.

$response = $client->threads()->runs()->list(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    parameters: [
        'limit' => 10,
    ],
);

$response->object; // 'list'
$response->firstId; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->lastId; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->hasMore; // false

foreach ($response->data as $result) {
    $result->id; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
    // ...
}

$response->toArray(); // ['object' => 'list', ...]

Threads Runs Steps Resource

retrieve

Retrieves a run step.

$response = $client->threads()->runs()->steps()->retrieve(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    runId: 'run_4RCYyYzX9m41WQicoJtUQAb8',
    stepId: 'step_1spQXgbAabXFm1YXrwiGIMUz',
);

$response->id; // 'step_1spQXgbAabXFm1YXrwiGIMUz'
$response->object; // 'thread.run.step'
$response->createdAt; // 1699564106
$response->runId; // 'run_4RCYyYzX9m41WQicoJtUQAb8'
$response->assistantId; // 'asst_gxzBkD1wkKEloYqZ410pT5pd'
$response->threadId; // 'thread_tKFLqzRN9n7MnyKKvc1Q7868'
$response->type; // 'message_creation'
$response->status; // 'completed'
$response->cancelledAt; // null
$response->completedAt; // 1699564119
$response->expiresAt; // null
$response->failedAt; // null
$response->lastError; // null
$response->stepDetails->type; // 'message_creation'
$response->stepDetails->messageCreation->messageId; // 'msg_i404PxKbB92d0JAmdOIcX7vA'

$response->toArray(); // ['id' => 'step_1spQXgbAabXFm1YXrwiGIMUz', ...]

list

Returns a list of run steps belonging to a run.

$response = $client->threads()->runs()->steps()->list(
    threadId: 'thread_tKFLqzRN9n7MnyKKvc1Q7868',
    runId: 'run_4RCYyYzX9m41WQicoJtUQAb8',
    parameters: [
        'limit' => 10,
    ],
);

$response->object; // 'list'
$response->firstId; // 'step_1spQXgbAabXFm1YXrwiGIMUz'
$response->lastId; // 'step_1spQXgbAabXFm1YXrwiGIMUz'
$response->hasMore; // false

foreach ($response->data as $result) {
    $result->id; // 'step_1spQXgbAabXFm1YXrwiGIMUz'
    // ...
}

$response->toArray(); // ['object' => 'list', ...]

Batches Resource

create

Creates a batch.

$fileResponse = $client->files()->upload(
    parameters: [
        'purpose' => 'batch',
        'file' => fopen('commands.jsonl', 'r'),
    ],
);

$fileId = $fileResponse->id;

$response = $client->batches()->create(
    parameters: [
        'input_file_id' => $fileId,
        'endpoint' => '/v1/chat/completions',
        'completion_window' => '24h',
    ],
);

$response->id; // 'batch_abc123'
$response->object; // 'batch'
$response->endpoint; // /v1/chat/completions
$response->errors; // null
$response->completionWindow; // '24h'
$response->status; // 'validating'
$response->outputFileId; // null
$response->errorFileId; // null
$response->createdAt; // 1714508499
$response->inProgressAt; // null
$response->expiresAt; // 1714536634
$response->completedAt; // null
$response->failedAt; // null
$response->expiredAt; // null
$response->requestCounts; // null
$response->metadata; // ['name' => 'My batch name']

$response->toArray(); // ['id' => 'batch_abc123', ...]

retrieve

Retrieves a batch.

$response = $client->batches()->retrieve(id: 'batch_abc123');

$response->id; // 'batch_abc123'
$response->object; // 'batch'
$response->endpoint; // /v1/chat/completions
$response->errors; // null
$response->completionWindow; // '24h'
$response->status; // 'validating'
$response->outputFileId; // null
$response->errorFileId; // null
$response->createdAt; // 1714508499
$response->inProgressAt; // null
$response->expiresAt; // 1714536634
$response->completedAt; // null
$response->failedAt; // null
$response->expiredAt; // null
$response->requestCounts->total; // 100
$response->requestCounts->completed; // 95
$response->requestCounts->failed; // 5
$response->metadata; // ['name' => 'My batch name']

$response->toArray(); // ['id' => 'batch_abc123', ...]
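Batches run asynchronously within their completion window, so a common pattern is to poll retrieve until the batch reaches a terminal status and then download the output file. A sketch under that assumption — the polling interval is arbitrary, and each line of the output file is one JSON result:

```php
do {
    sleep(30);
    $batch = $client->batches()->retrieve(id: 'batch_abc123');
} while (! in_array($batch->status, ['completed', 'failed', 'expired', 'cancelled'], true));

if ($batch->status === 'completed' && $batch->outputFileId !== null) {
    // Each line of the JSONL output file holds the result of one request.
    $contents = $client->files()->download($batch->outputFileId);

    foreach (explode("\n", trim($contents)) as $line) {
        $result = json_decode($line, true);
        // $result['custom_id'], $result['response'], ...
    }
}
```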

cancel

Cancels a batch.

$response = $client->batches()->cancel(id: 'batch_abc123');

$response->id; // 'batch_abc123'
$response->object; // 'batch'
$response->endpoint; // /v1/chat/completions
$response->errors; // null
$response->completionWindow; // '24h'
$response->status; // 'cancelling'
$response->outputFileId; // null
$response->errorFileId; // null
$response->createdAt; // 1711471533
$response->inProgressAt; // 1711471538
$response->expiresAt; // 1711557933
$response->cancellingAt; // 1711475133
$response->cancelledAt; // null
$response->requestCounts->total; // 100
$response->requestCounts->completed; // 23
$response->requestCounts->failed; // 1
$response->metadata; // ['name' => 'My batch name']

$response->toArray(); // ['id' => 'batch_abc123', ...]

list

Returns a list of batches.

$response = $client->batches()->list(
    parameters: [
        'limit' => 10, 
    ],
);

$response->object; // 'list'
$response->firstId; // 'batch_abc123'
$response->lastId; // 'batch_abc456'
$response->hasMore; // true

foreach ($response->data as $result) {
    $result->id; // 'batch_abc123'
    // ...
}

$response->toArray(); // ['object' => 'list', ...]

Edits Resource (deprecated)

OpenAI deprecated the Edits API; it stopped working on January 4, 2024. https://openai.com/blog/gpt-4-api-general-availability#deprecation-of-the-edits-api

create

Creates a new edit for the provided input, instruction, and parameters.

$response = $client->edits()->create([
    'model' => 'text-davinci-edit-001',
    'input' => 'What day of the wek is it?',
    'instruction' => 'Fix the spelling mistakes',
]);

$response->object; // 'edit'
$response->created; // 1589478378

foreach ($response->choices as $result) {
    $result->text; // 'What day of the week is it?'
    $result->index; // 0
}

$response->usage->promptTokens; // 25
$response->usage->completionTokens; // 32
$response->usage->totalTokens; // 57

$response->toArray(); // ['object' => 'edit', ...]

Meta Information

On all response objects you can access the meta information returned by the API via the meta() method.

$response = $client->completions()->create([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'Say this is a test',
]);

$meta = $response->meta();

$meta->requestId; // '574a03e2faaf4e9fd703958e4ddc66f5'

$meta->openai->model; // 'gpt-3.5-turbo-instruct'
$meta->openai->organization; // 'org-jwe45798ASN82s'
$meta->openai->version; // '2020-10-01'
$meta->openai->processingMs; // 425

$meta->requestLimit->limit; // 3000
$meta->requestLimit->remaining; // 2999
$meta->requestLimit->reset; // '20ms'

$meta->tokenLimit->limit; // 250000
$meta->tokenLimit->remaining; // 249984
$meta->tokenLimit->reset; // '3ms'

The toArray() method returns the meta information in the form originally returned by the API.

$meta->toArray();

// [ 
//   'x-request-id' => '574a03e2faaf4e9fd703958e4ddc66f5',
//   'openai-model' => 'gpt-3.5-turbo-instruct',
//   'openai-organization' => 'org-jwe45798ASN82s',
//   'openai-processing-ms' => 402,
//   'openai-version' => '2020-10-01',
//   'x-ratelimit-limit-requests' => 3000,
//   'x-ratelimit-remaining-requests' => 2999,
//   'x-ratelimit-reset-requests' => '20ms',
//   'x-ratelimit-limit-tokens' => 250000,
//   'x-ratelimit-remaining-tokens' => 249983,
//   'x-ratelimit-reset-tokens' => '3ms',
// ]

On streaming responses you can access the meta information on the response stream object.

$stream = $client->completions()->createStreamed([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'Say this is a test',
]);
    
$stream->meta(); 

For further details about the rate limits and what to do if you hit them, visit the OpenAI documentation.
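The meta information can also drive simple client-side backoff. A sketch, assuming the reset header stays in the duration format shown above (e.g. '20ms' or '1s'):

```php
$meta = $response->meta();

if ($meta->requestLimit->remaining === 0) {
    // 'x-ratelimit-reset-requests' is a duration string such as '20ms' or '1s'.
    $reset = $meta->requestLimit->reset;

    $milliseconds = str_ends_with($reset, 'ms')
        ? (int) $reset                       // '20ms' => 20
        : (int) ((float) $reset * 1000);     // '1s'   => 1000

    // Pause until the request quota resets before sending the next request.
    usleep($milliseconds * 1000);
}
```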

Troubleshooting

Timeout

You may run into a timeout when sending requests to the API. The default timeout depends on the HTTP client used.

You can increase the timeout by configuring the HTTP client and passing it into the factory.

This example illustrates how to increase the timeout using Guzzle.

OpenAI::factory()
    ->withApiKey($apiKey)
    ->withOrganization($organization)
    ->withHttpClient(new \GuzzleHttp\Client(['timeout' => $timeout]))
    ->make();

Testing

The package provides a fake implementation of the OpenAI\Client class that allows you to fake the API responses.

To test your code ensure you swap the OpenAI\Client class with the OpenAI\Testing\ClientFake class in your test case.

The fake responses are returned in the order they are provided while creating the fake client.

All responses have a fake() method that allows you to easily create a response object by providing only the parameters relevant for your test case.

use OpenAI\Testing\ClientFake;
use OpenAI\Responses\Completions\CreateResponse;

$client = new ClientFake([
    CreateResponse::fake([
        'choices' => [
            [
                'text' => 'awesome!',
            ],
        ],
    ]),
]);

$completion = $client->completions()->create([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'PHP is ',
]);

expect($completion['choices'][0]['text'])->toBe('awesome!');

In case of a streamed response you can optionally provide a resource holding the fake response data.

use OpenAI\Testing\ClientFake;
use OpenAI\Responses\Chat\CreateStreamedResponse;

$client = new ClientFake([
    CreateStreamedResponse::fake(fopen('file.txt', 'r')),
]);

$completion = $client->chat()->createStreamed([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

expect($completion->getIterator()->current())
    ->id->toBe('chatcmpl-6yo21W6LVo8Tw2yBf7aGf2g17IeIl');

After the requests have been sent there are various methods to ensure that the expected requests were sent:

// assert completion create request was sent
$client->assertSent(Completions::class, function (string $method, array $parameters): bool {
    return $method === 'create' &&
        $parameters['model'] === 'gpt-3.5-turbo-instruct' &&
        $parameters['prompt'] === 'PHP is ';
});
// or
$client->completions()->assertSent(function (string $method, array $parameters): bool {
    // ...
});

// assert 2 completion create requests were sent
$client->assertSent(Completions::class, 2);

// assert no completion create requests were sent
$client->assertNotSent(Completions::class);
// or
$client->completions()->assertNotSent();

// assert no requests were sent
$client->assertNothingSent();

To write tests expecting the API request to fail you can provide a Throwable object as the response.

$client = new ClientFake([
    new \OpenAI\Exceptions\ErrorException([
        'message' => 'The model `gpt-1` does not exist',
        'type' => 'invalid_request_error',
        'code' => null,
    ])
]);

// the `ErrorException` will be thrown
$completion = $client->completions()->create([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'PHP is ',
]);

Services

Azure

In order to use the Azure OpenAI Service, it is necessary to construct the client manually using the factory.

$client = OpenAI::factory()
    ->withBaseUri('{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}')
    ->withHttpHeader('api-key', '{your-api-key}')
    ->withQueryParam('api-version', '{version}')
    ->make();

To use Azure, you must deploy a model, identified by the {deployment-id}, which is already incorporated into the API calls. As a result, you do not have to provide the model during the calls since it is included in the BaseUri.

Therefore, a basic sample completion call would be:

$result = $client->completions()->create([
    'prompt' => 'PHP is'
]);

OpenAI PHP is open-source software licensed under the MIT license.


client's Issues

why not support stream responses?

Why not support streamed responses? Streaming is much nicer for users: it shows them that the AI is working. If they have to wait a long time, users may think the AI is not working.

Getting an error when I try to run the chat() method

Thank you for providing this code. When I try ..., I get this error:

Fatal error: Uncaught Error: Call to undefined method OpenAI\Client::chat()

I've been able to run the code snippets fine as I work down the page. I get this error when I try the chat method. Any suggestions? Thanks!

Error Call to undefined method OpenAI\Client::chat()

Hi!!!

I'm doing some tests and for the chat I'm having this error, the others tested resource worked, except the chat.

The code is exactly as in the example:

    $api_key = getenv('OPENAI_KEY');
    $organization = getenv('ORGANIZATION');

    $client = \OpenAI::client($api_key, $organization);

    $response = $client->chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => [
            ['role' => 'user', 'content' => 'Hello!'],
        ],
    ]);

    var_dump($response);

Add Proxy Support for HTTP Request

Can you add support for a proxy setting for the HTTP client? For now, the classes in openai-php/client are all final, so we can't extend them for customization.

Symfony Bundle

Hi, thank you for this library that is very easy to use.

I created a Symfony bundle by copying things from the Laravel integration. You can find it here: https://github.com/GromNaN/openai-symfony (work in progress).

What do you think of moving this project into the openai-php organisation? It would be good for this project to provide an integration with a 2nd major framework. The bundle is not published on Packagist yet, so that it would be a clean start.

fully typed responses and requests

Hi @nunomaduro

First of all, thank you for reviewing and merging my previous PRs so quickly! 👍
It's a pleasure to help you with this package. I've already learned a lot about (Open)AI, and even more from the way you build a clean package.

I am a huge fan of using fully typed responses and requests. Therefore I gave it a try with the moderations endpoint to see how it could work.

What I ended up with is the following:

$client = OpenAI::client('TOKEN');

$request = new ModerationCreateRequest(input: 'I want to kill them.', model: ModerationModel::TextModerationLatest);

$response = $client->moderations()->create($request);

dump($response->id); // modr-5vvCuUd3dRjgIumIZIu0yBepv5qwL
dump($response->model); // text-moderation-003
dump($response->results[0]->flagged); // true
dump($response->results[0]->categories[0]->toArray()); // ["category" => "hate", "violated" => true, "score" => 0.40681719779968 ]

In my opinion this gives developers a better UX than plain arrays.

More or less I took the approach Steve McDougall described here: https://laravel-news.com/working-with-data-in-api-integrations

I also implemented request factories to give the user various options for how to create the request instance:

// create the request directly
$request = new ModerationCreateRequest(
    input: 'I want to kill them.',
    model: ModerationModel::TextModerationLatest,
);

// pass an array to a factory instance
$request = (new ModerationCreateRequestFactory)->make([
    'input' => 'I want to kill them.',
    'model' => ModerationModel::TextModerationLatest,
]);

// pass an array to a static factory method
$request = ModerationCreateRequestFactory::new([
    'input' => 'I want to kill them.',
    'model' => ModerationModel::TextModerationLatest,
]);

If you want to have a look, I pushed the POC here: https://github.com/gehrisandro/openai-php-client/tree/poc-strong-typed-requests-and-responses

Authorization issue (html 500)

I am trying to call OpenAi in a test website for future projects, but encountered a problem.

On my website I have a button and output field.
When clicking the button, the following code is executed via JS:

<script>
console.log("hello");
window.onload = function() {
    document.getElementById("submit-request").addEventListener("click", function() {
        var prompt = document.getElementById("prompt-input").value;
        var xhr = new XMLHttpRequest();

        xhr.open("GET", "/wp-admin/admin-ajax.php?action=make_request", true);

        xhr.onreadystatechange = function() {
            if (xhr.readyState === 4 && xhr.status === 200) {
                var response = JSON.parse(xhr.responseText);
                document.getElementById("response-output").innerHTML = JSON.stringify(response);
            }
        };
        xhr.send();
    });
};
</script>

The PHP function behind the "make_request" action is the following:

add_action( 'wp_ajax_make_request', 'make_request' );
add_action( 'wp_ajax_nopriv_make_request', 'make_request' );

function make_request() {
    $client = OpenAI::client('sk-xxx');

    $result = $client->completions()->create([
        'model' => 'text-davinci-003',
        'prompt' => 'PHP is',
        'max_tokens' => 6,
    ]);

    echo $result['choices'][0]['text'];

    wp_die();
}

As you can see, it is just the basic example from the Readme for the moment. The API key is removed for obvious reasons.
This is the error I got in the web browser console:
"GET https://xx.host.com/wp-admin/admin-ajax.php?action=make_request&prompt=kn 500"

What is the problem? Is there some extra authorisation I should do for openai-php/client?

Timeout with GPT-4, stream = true required

It seems that with GPT-4 it takes too long to receive a response from the API. The reference mentions setting stream = true to start receiving the first tokens immediately and avoid a timeout.

Error creating fine tune with default params

I'm facing a problem with the default params in the OpenAI fine-tunes API.
Trying to create a fine-tune with this code:

 $responseFineTuning = $openAIClient->fineTunes()->create([
    'training_file' => 'my_file_id',
    'model' => 'davinci',
]);

results in a throw:

 TypeError 

  OpenAI\Responses\FineTunes\RetrieveResponseHyperparams::__construct(): Argument #1 ($batchSize) must be of type int, null given, called in /code/vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseHyperparams.php on line 39

  at vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseHyperparams.php:20

    16|     * @use ArrayAccessible<array{batch_size: int, learning_rate_multiplier: float, n_epochs: int, prompt_loss_weight: float}>
    17|     */
    18|    use ArrayAccessible;
    19|
  > 20|    private function __construct(
    21|        public readonly int $batchSize,
    22|        public readonly float $learningRateMultiplier,
    23|        public readonly int $nEpochs,
    24|        public readonly float $promptLossWeight,

      +3 vendor frames 
  4   app/App/Console/Commands/GenerateFineTuning.php
      OpenAI\Resources\FineTunes::create(["file-XXXXXXXXXXXXX", "davinci"])

      +13 vendor frames 
  18  artisan:37
      Illuminate\Foundation\Console\Kernel::handle(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))

Looking at the API, $batchSize is an optional parameter and null by default.


Parameters in the Fines Tunes Resource

Sorry for my English, but I want to know whether $parameters is an array of arrays or what kind of structure I need to use, because in the documentation I read that I need a JSONL file, so I don't know how to provide it here in PHP.

$client->fineTunes()->create($parameters);

Thanks

change Endpoint to Azure

Since the OpenAI API is available in Azure, is there a possibility to change the endpoint to Azure, or is there a plan to add this feature?

Answers are truncated

I can't find it in the documentation, maybe I'm missing something. But the answers are truncated. What could be the reason?

Error: NULL finish_reason for completions

Sometimes getting error:

OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /var/www/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 24

$attributes from \OpenAI\Responses\Completions\CreateResponseChoice::from:

array(4) {
  ["text"]=>
  string(972) " ... somthing here ... "
  ["index"]=>
  int(0)
  ["logprobs"]=>
  NULL
  ["finish_reason"]=>
  NULL
}

Thoughts on concurrent/async requests?

Hello!

I was attempting to replace some of the underlying concrete implementations of this project in order to send concurrent API requests to OpenAI to generate multiple completions at once, but due to the architecture of the Resources, they will always make a Request and generate a Response.

For example, 10 synchronous requests to the /completions endpoint with this library can take up to 50 seconds, depending on what's being generated.

I did a basic implementation using Laravel's Http client utilizing pooling (basically Guzzle Async), and I can generate the same 10 completions in ~4-5 seconds.

Any thoughts on adding concurrent/async support in the future, or at least some way of collecting a pool of Requests, so developers could process them on their own?

Pay-as-you-go users can make up to 3,000 requests/minute after 48 hours.

Thanks!
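The workaround described above can also be done with plain Guzzle promises, bypassing the library's Resources entirely. A sketch (model name and prompts are illustrative):

```php
<?php
// Sketch: fanning out completion requests concurrently with Guzzle
// promises instead of the library's synchronous Resources.
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

$http = new Client([
    'base_uri' => 'https://api.openai.com/v1/',
    'headers' => ['Authorization' => 'Bearer ' . getenv('OPENAI_API_KEY')],
]);

$prompts = ['Write a haiku.', 'Write a limerick.'];

$promises = [];
foreach ($prompts as $i => $prompt) {
    // postAsync() returns immediately; the requests run in parallel.
    $promises[$i] = $http->postAsync('completions', [
        'json' => [
            'model' => 'text-davinci-003',
            'prompt' => $prompt,
            'max_tokens' => 64,
        ],
    ]);
}

// settle() waits for every promise, fulfilled or rejected.
$results = Utils::settle($promises)->wait();
```

The trade-off is that the raw JSON responses then need to be decoded by hand rather than arriving as the library's typed response objects.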

Make base URI configurable

Hi,

what do you think about making the base URI configurable? At the moment, the base URI is hardcoded to https://api.openai.com/v1.

Making it configurable would make end-to-end testing of applications using the OpenAI client easier, as one could point the client at a mock server in the test environment.

I would implement this as a non-breaking change via the following steps:

  1. Extract interface from OpenAI\ValueObjects\Transporter\BaseUri
  2. Change parameter type of $baseUri in OpenAI\ValueObjects\Transporter\Payload::toRequest to extracted interface
  3. Add optional parameter BaseUriInterface $baseUri = null to OpenAI::client
  4. Modify OpenAI::client, so that it handles the default value for $baseUri like that:
public static function client(string $apiToken, string $organization = null, BaseUriInterface $baseUri = null): Client
{
    ...
    $baseUri = $baseUri ?? BaseUri::from('api.openai.com/v1');
    ...
}

What do you think about that? If you don't object, I would implement that.

An issue on Fine Tune List API: when the status_details of a RetrieveResponseFile is an exception message (string)

Hi, I encountered an issue with retrieving the list of fine tunes.

  1. I uploaded a JSONL file.
  2. Used the file to create a new fine-tune.
  3. Tried to retrieve the list of fine-tunes.
  4. Got the error below.

I suspect this is because the status details show that the file I uploaded was invalid. Strictly speaking this is not an issue with the package, but it would be great if the package could also handle this scenario.

I hope this can be resolved soon. Thanks!

OpenAI\Responses\FineTunes\RetrieveResponseFile::__construct(): Argument #8 ($statusDetails) must be of type ?array, string given, called in /var/www/html/vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponseFile.php on line 50

The exception upon checking and the stack trace I received were attached as screenshots (omitted here).

Model gpt-3.5-turbo not matching settings in usage report

Hi 👋

I set up the OpenAI client to use the gpt-3.5-turbo model; however, in the usage report it shows up as gpt-3.5-turbo-0301 (screenshot omitted).

My configuration is set to use gpt-3.5-turbo (screenshot omitted).

Although the two are almost the same, the documentation states that they can differ (screenshot omitted).

In my tests, I noticed that it is really not following the system instructions.

I could not find the part of the code responsible, so I cannot submit a pull request. How can we track this down?

new chat completion endpoint (api version 1.2.0)

Does this support ChatCompletion with gpt-3.5-turbo?

import openai

openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the world series in 2020?"},
{"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
{"role": "user", "content": "Where was it played?"}
]
)
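For reference, the equivalent call with this package's chat resource (the getting-started example in the README uses the same shape) would look like:

```php
<?php
// PHP equivalent of the Python ChatCompletion snippet above, using
// this package's chat resource.
require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$result = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'Who won the world series in 2020?'],
        ['role' => 'assistant', 'content' => 'The Los Angeles Dodgers won the World Series in 2020.'],
        ['role' => 'user', 'content' => 'Where was it played?'],
    ],
]);

echo $result->choices[0]->message->content;
```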

Undefined array key "events"

I get the following error: Undefined array key "events"

Here is the stack trace:


  Undefined array key "events"

  at vendor/openai-php/client/src/Responses/FineTunes/RetrieveResponse.php:52
     48▕     public static function from(array $attributes): self
     49▕     {
     50▕         $events = array_map(fn (array $result): RetrieveResponseEvent => RetrieveResponseEvent::from(
     51▕             $result
  ➜  52▕         ), $attributes['events']);
     53▕
     54▕         $resultFiles = array_map(fn (array $result): RetrieveResponseFile => RetrieveResponseFile::from(
     55▕             $result
     56▕         ), $attributes['result_files']);

Here is the code I am running: $response = $client->fineTunes()->list();

Problem with the API key

Hello,
I created and changed my key today and yesterday, but I still get the errors below. What can I do?

Fatal error: Uncaught OpenAI\Exceptions\ErrorException: You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys. in D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Transporters\HttpTransporter.php:61 Stack trace: #0 D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Resources\Completions.php(26): OpenAI\Transporters\HttpTransporter->requestObject() #1 D:\OpenServer\domains\localhost\openai\test.php(9): OpenAI\Resources\Completions->create() #2 {main} thrown in D:\OpenServer\domains\localhost\openai\vendor\openai-php\client\src\Transporters\HttpTransporter.php on line 61

HttpTransporter object to return error instead of throwing it.

Hope everyone is doing well.
Currently, the architecture of the component is such that src/Transporters/HttpTransporter.php, in its requestObject method, checks for the presence of $response['error']; if there is none, it returns the response object (passing it to the corresponding CreateResponse class).
I suggest redefining the behavior of the transporter so that it returns any error and passes it along to the CreateResponse class. I propose that the CreateResponse class process the response attributes, including those containing errors, and ultimately forward the error details to the application that initiated the API call.
The benefits of my idea:

  1. In line with TDD approach, the developer can easily check for the presence of the error message, statusCode, and deal with them accordingly;
  2. The developer will be able to log those errors easily and use it for overall enhancement and debugging of his host application;
  3. The developer can provide a more user-friendly error message to the end-user based on the returned/received error;
  4. The developer will be able to provide translation of the errors to the end-user.

Currently, I don't see how these four things are possible with the transporter's current behavior.
The following test can show better what I mean:

public function test_client_handles_error_response_correctly(): void
{
    $client = OpenAI::client('sk-````');
    $response = $client->completions()->create([
        'prompt' => 'PHP is',
        'model' => 'wrongModel', // invoke error
        'max_tokens' => 20,
        'temperature' => 0,
    ]);

    // Make assertions
    $this->assertNotEmpty($response->error['message']);
    $this->assertEquals(500, $response->error['status_code']);
}

How to remember previous chat when using `gpt-3.5-turbo`?

Hi guys,

I am using gpt-3.5-turbo. Whenever I use this to get an answer, it forgets the previous one.

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Message here'],
    ],
]);

How do I link it to the previous message? Using the id from the response, or something else?
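The chat endpoint is stateless: there is no conversation id to link to. You keep the history yourself and resend it with every request, as in this sketch:

```php
<?php
// The chat API is stateless, so the caller must resend the full
// conversation history with each request to preserve context.
require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$messages = [
    ['role' => 'user', 'content' => 'My name is Ada.'],
];

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => $messages,
]);

// Append the assistant's reply, then the follow-up question.
$messages[] = ['role' => 'assistant', 'content' => $response->choices[0]->message->content];
$messages[] = ['role' => 'user', 'content' => 'What is my name?'];

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => $messages, // full history gives the model the context
]);
```

Note that tokens are billed for the whole history on every call, so long conversations are usually trimmed or summarized.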

Request with context

I could not make a request with the context that preceded the conversation. Is this functionality not implemented, or am I looking at the wrong function? Maybe you don't call it context, but something else?

TypeError when constructing OpenAI response choice object with null finish reason.

TypeError: OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 26
  File "/app/Actions/Document/CreateNewContent.php", line 64, in App\Actions\Document\CreateNewContent::complete
    $result = OpenAI::completions()->create($parameters);
  File "/app/Actions/Document/CreateNewContent.php", line 39, in App\Actions\Document\CreateNewContent::App\Actions\Document\{closure}
    return $this->complete($template, $document, $data);
  File "/app/Actions/Document/CreateNewContent.php", line 42, in App\Actions\Document\CreateNewContent::create
    });
  File "/app/Http/Controllers/App/DocumentController.php", line 182, in App\Http\Controllers\App\DocumentController::writeForMe
    $choices = $contentCreator->create($template, $document, $data);
  File "/public/index.php", line 52
    $request = Request::capture()
...
(77 additional frame(s) were not displayed)

Unable to mock anything due to everything being `final`

Wanted to start with thanks a lot for this great package! Really appreciate the work you've done here. ❤️


I'm currently trying to mock responses from OpenAI, but this does not appear to be easily done, because everything is marked final, which prevents mocking anything. This makes for a very painful developer experience when testing.

Maybe we can use a factory or something in the OpenAI::client() static method, and remove final on the Client, so it's at least possible to mock the client itself?

final class OpenAIClientFactory
{
    public function make(string $apiToken, string $organization = null): Client
    {
        // ...
    }
}
final class OpenAI
{
    /**
     * Creates a new Open AI Client with the given API token.
     */
    public static function client(string $apiToken, string $organization = null): Client
    {
        return app(OpenAIClientFactory::class)->make($apiToken, $organization);
    }
}
$this->app->bind(OpenAIClientFactory::class);
// TestCase

use OpenAI\Client;

$client = Mockery::mock(Client::class);

app()->bind(OpenAIClientFactory::class, function () use ($client) {
    $mock = Mockery::mock(OpenAIClientFactory::class);

    $mock->shouldReceive('make')->andReturn($client);

    return $mock;
});

$client->shouldReceive('...')->andReturn('...');

Let me know your thoughts, thanks!

OpenAI::client() return error

Hello, $client = OpenAI::client($yourApiKey); returns {"error":"Parse error: syntax error, unexpected '?', expecting function (T_FUNCTION) or const (T_CONST)"}

Any possible reasons? Thanks!

all results truncated

Hi,

All my results to prompts are getting truncated. Typically less than a sentence is returned. Any idea why? Example below:

Any help is much appreciated.

Wyatt

My prompt:

"Write me a story."

Result:

[model] => text-davinci-003
[choices] => Array
    (
        [0] => OpenAI\Responses\Completions\CreateResponseChoice Object
            (
                [text] => Once upon a time, there was a young girl named Daisy who was
                [index] => 0
                [logprobs] => 
                [finishReason] => length
            )
    )
[usage] => OpenAI\Responses\Completions\CreateResponseUsage Object
    (
        [promptTokens] => 4
        [completionTokens] => 16
        [totalTokens] => 20
    )
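The dump itself contains the answer: finishReason is "length" and completionTokens is 16, meaning the completion hit its token cap. The legacy completions endpoint defaults max_tokens to a small value (16, to my understanding; worth confirming against the API reference) when it is not set, so passing a larger value avoids the truncation:

```php
<?php
// finishReason "length" means the reply hit the token cap; raise
// max_tokens to get a full completion.
require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$result = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => 'Write me a story.',
    'max_tokens' => 1000, // without this, the default cap is very small
]);

echo $result->choices[0]->text;
```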

Response error:Your access was terminated due to violation of our policies?

Request

$client = OpenAI::client($key);
$result = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => $input->getArgument('question'),
    'temperature' => 0.7,
    'top_p' => 1,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
    'max_tokens' => 600,
]);

Response

 Your access was terminated due to violation of our policies, please check your email for more information. If you believe this is
   in error and would like to appeal, please contact [email protected].

If the API Key contains a newline, an incorrect error is thrown

I am reading my API key from a file. My editor was adding a newline if the file was open. The result was that any call made via the client would error with the message "you must provide a model parameter", even though a model parameter was being sent.

I fixed my issue by simply trimming the file's contents before using them, but the error message from the API was very confusing. Maybe the client should just trim the key before sending it to the API?
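The workaround described above is a one-liner; the file path here is a hypothetical example:

```php
<?php
// Trim the key when reading it from a file so a trailing newline
// does not corrupt the Authorization header.
require 'vendor/autoload.php';

$apiKey = trim(file_get_contents('/path/to/api-key')); // hypothetical path

$client = OpenAI::client($apiKey);
```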

Rate limiter

Hi,

Quick question, does this package include a rate limiter or do we need to do it ourselves?

Thanks,
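As far as I can tell the package does not ship a rate limiter (an assumption based on the versions discussed in these issues), so a minimal userland throttle might look like this sketch:

```php
<?php
// Minimal client-side throttle sketch: caps calls per rolling
// 60-second window by sleeping before the next request.
final class Throttle
{
    /** @var list<float> timestamps of recent calls */
    private array $calls = [];

    public function __construct(private readonly int $maxPerMinute)
    {
    }

    public function await(): void
    {
        $now = microtime(true);
        // Drop timestamps older than the 60-second window.
        $this->calls = array_values(array_filter(
            $this->calls,
            fn (float $t): bool => $now - $t < 60.0,
        ));

        if (count($this->calls) >= $this->maxPerMinute) {
            // Sleep until the oldest call leaves the window.
            usleep((int) ((60.0 - ($now - $this->calls[0])) * 1e6));
        }

        $this->calls[] = microtime(true);
    }
}
```

Usage: call $throttle->await() before each $client->completions()->create(...). Note this only throttles a single process; multi-worker setups would need shared state (e.g. Redis).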

Add Timeout Param

The official Python library allows a timeout to be set on requests. It would be really helpful for production applications to be able to set up a timeout on requests so we don't keep our web workers hanging if there is hiccups in connections or issues on the OpenAI side.
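Until a dedicated timeout parameter exists, a timeout can already be set on the Guzzle client injected through the factory shown in the README; the specific values below are illustrative:

```php
<?php
// Sketch: setting request timeouts on the injected Guzzle client so
// web workers are not left hanging on slow upstream responses.
require 'vendor/autoload.php';

$client = OpenAI::factory()
    ->withApiKey(getenv('OPENAI_API_KEY'))
    ->withHttpClient(new \GuzzleHttp\Client([
        'timeout' => 30,        // seconds to wait for the full response
        'connect_timeout' => 5, // seconds to wait for the TCP connection
    ]))
    ->make();
```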

($finishReason) must be of type string

local.ERROR: OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 22 {"exception":"[object] (TypeError(code: 0): OpenAI\Responses\Completions\CreateResponseChoice::__construct(): Argument #4 ($finishReason) must be of type string, null given, called in /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php on line 22 at /data1/chatgpt/vendor/openai-php/client/src/Responses/Completions/CreateResponseChoice.php:9)
[stacktrace]

Authorization error, BearerAuthentication

Hello,
I have a persistent error and can't get past it.

require __DIR__ . '/vendor/autoload.php';

use OpenAI\Client;
use OpenAI\Api\Authentication\BearerAuthentication;
use OpenAI\Resources\Completions\Create as CompletionCreate;

$apiKey = 'sk-fEp........';
$client = new Client(new BearerAuthentication($apiKey));

function generateText($client, $model, $prompt, $length, $temperature = 0.5) {
    $response = $client->completions()->create(
        $model,
        (new CompletionCreate())
            ->setPrompt($prompt)
            ->setMaxTokens($length)
            ->setTemperature($temperature)
    );
    return $response->getChoices()[0]->getText();
}


Fatal error: Uncaught Error: Class "OpenAI\Api\Authentication\BearerAuthentication" not found in D:\OpenServer\domains\localhost\openai\test.php:10 Stack trace: #0 {main} thrown in D:\OpenServer\domains\localhost\openai\test.php on line 10
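The classes in the snippet above (OpenAI\Api\Authentication\BearerAuthentication, OpenAI\Resources\Completions\Create) are not part of this package, which is why the autoloader cannot find them. A sketch of the same call using this package's actual entry point, per the README:

```php
<?php
// This package has no BearerAuthentication class: the API key is
// passed straight to OpenAI::client().
require __DIR__ . '/vendor/autoload.php';

$client = OpenAI::client('sk-fEp........'); // key elided as in the report

$response = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => 'Hello',
    'max_tokens' => 64,
    'temperature' => 0.5,
]);

echo $response->choices[0]->text;
```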

cURL error 60: SSL certificate problem: certificate has expired

Is anyone else having this issue? This is a brand-new Laravel project on Windows, running through php artisan serve; I'm just running the code from the example in the docs.

My code:

Route::get('/', function () {
    $client = OpenAI::client(config('app.open-ai-key'));

    $prompt = <<<TEXT
Extract the requirements for this job offer as a list.
 
"We are seeking a PHP web developer to join our team. The ideal candidate will have experience with PHP, MySQL, HTML, CSS, and JavaScript. They will be responsible for developing and managing web applications and working with a team of developers to create high-quality and innovative software. The salary for this position is negotiable and will be based on experience."
TEXT;

    $result = $client->completions()->create([
        'model' => 'text-davinci-002',
        'prompt' => $prompt,
    ]);

    ray($result);
});

Flare exception:
https://flareapp.io/share/xPQoaD25#F47
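On Windows, "certificate has expired" from cURL usually means PHP's CA bundle is outdated rather than the server's certificate being bad. A common fix (an assumption about this particular setup) is to point php.ini at a fresh cacert.pem downloaded from curl.se:

```ini
; php.ini: point PHP's cURL and OpenSSL at an up-to-date CA bundle
; (download cacert.pem from https://curl.se/docs/caextract.html first)
curl.cainfo = "C:\path\to\cacert.pem"
openssl.cafile = "C:\path\to\cacert.pem"
```

Restart the PHP process (or php artisan serve) afterwards so the new ini values are picked up.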

PHP 8.1+

Not really an issue, but why does it need PHP 8.1+ to run? I would like to use the official client. Anyway, besides that, I really love GPT-3.
