
aallam / openai-kotlin

OpenAI API client for Kotlin with multiplatform and coroutines capabilities.

License: MIT License

Kotlin 100.00%
openai kotlin client multiplatform coroutines api chatgpt dall-e llm whisper gpt

openai-kotlin's Introduction

OpenAI API client for Kotlin


Kotlin client for OpenAI's API with multiplatform and coroutines capabilities.

📦 Setup

  1. Install the OpenAI API Kotlin client by adding the following dependency to your build.gradle file:
repositories {
    mavenCentral()
}

dependencies {
    implementation "com.aallam.openai:openai-client:3.7.1"
}
  2. Choose one of Ktor's engines and add it to your dependencies.
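For example, on the JVM you might pair the client with the OkHttp engine. This is a sketch; the engine artifact and version shown here are assumptions, pick whichever Ktor 2.x engine matches your target:

```groovy
dependencies {
    // the client itself
    implementation "com.aallam.openai:openai-client:3.7.1"
    // a Ktor engine; OkHttp shown here as one JVM option
    implementation "io.ktor:ktor-client-okhttp:2.3.2"
}
```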

BOM

Alternatively, you can use openai-client-bom by adding the following dependency to your build.gradle file:

dependencies {
    // import Kotlin API client BOM
    implementation platform('com.aallam.openai:openai-client-bom:3.7.1')

    // define dependencies without versions
    implementation 'com.aallam.openai:openai-client'
    runtimeOnly 'io.ktor:ktor-client-okhttp'
}

Multiplatform

In multiplatform projects, add the openai-client dependency to commonMain, and choose an engine for each target.
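A minimal sketch of what that can look like in a multiplatform build; the target names and engine choices here are assumptions, adapt them to your project:

```groovy
kotlin {
    sourceSets {
        commonMain {
            dependencies {
                // shared client, no engine here
                implementation "com.aallam.openai:openai-client:3.7.1"
            }
        }
        jvmMain {
            dependencies {
                // JVM target: OkHttp engine
                implementation "io.ktor:ktor-client-okhttp:2.3.2"
            }
        }
        iosMain {
            dependencies {
                // Apple targets: Darwin engine
                implementation "io.ktor:ktor-client-darwin:2.3.2"
            }
        }
    }
}
```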

Maven

Gradle is required for multiplatform support, but nothing stops you from using the JVM client in a Maven project. You still need to add one of Ktor's engines to your dependencies.

Set up the client with Maven:
<dependencies>
    <dependency>
        <groupId>com.aallam.openai</groupId>
        <artifactId>openai-client-jvm</artifactId>
        <version>3.7.1</version>
    </dependency>
            
    <dependency>
        <groupId>io.ktor</groupId>
        <artifactId>ktor-client-okhttp-jvm</artifactId>
        <version>2.3.2</version>
        <scope>runtime</scope>
    </dependency>
</dependencies>

The BOM is not supported for Maven projects.

⚡️ Getting Started

Note

OpenAI encourages using environment variables for the API key. Read more.

Create an instance of OpenAI client:

val openAI = OpenAI(
    token = "your-api-key",
    timeout = Timeout(socket = 60.seconds),
    // additional configurations...
)

Or you can create an instance of OpenAI using a pre-configured OpenAIConfig:

val config = OpenAIConfig(
    token = apiKey,
    timeout = Timeout(socket = 60.seconds),
    // additional configurations...
)

val openAI = OpenAI(config)

Use your OpenAI instance to make API requests. Learn more.
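As a minimal sketch of such a request (the model name and message content are placeholders), a chat completion looks like this:

```kotlin
import com.aallam.openai.api.chat.ChatCompletion
import com.aallam.openai.api.chat.ChatCompletionRequest
import com.aallam.openai.api.chat.ChatMessage
import com.aallam.openai.api.chat.ChatRole
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI

// Hypothetical helper showing the request/response shape.
suspend fun ask(openAI: OpenAI): String {
    val request = ChatCompletionRequest(
        model = ModelId("gpt-3.5-turbo"),
        messages = listOf(
            ChatMessage(role = ChatRole.User, content = "Hello!")
        )
    )
    val completion: ChatCompletion = openAI.chatCompletion(request)
    // take the first choice's message text
    return completion.choices.first().message.content.orEmpty()
}
```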

Supported features


Looking for a tokenizer? Try ktoken, a Kotlin library for tokenizing text.

📚 Guides

Get started and understand more about how to use OpenAI API client for Kotlin with these guides:

ℹ️ Sample apps

Sample apps are available under sample; please check the README for running instructions.

🔒 ProGuard / R8

The required ProGuard rules are already bundled into the JAR and are picked up by R8 automatically.

📸 Snapshots


Learn how to import snapshot version

To import snapshot versions into your project, add the following code snippet to your gradle file:

repositories {
   //...
   maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
}

🛠️ Troubleshooting

For common issues and their solutions, check the Troubleshooting Guide.

⭐️ Support

Appreciate the project? Here's how you can help:

  1. Star: Give it a star at the top right. It means a lot!
  2. Contribute: Found an issue or have a feature idea? Submit a PR.
  3. Feedback: Have suggestions? Open an issue or start a discussion.

📄 License

OpenAI Kotlin API Client is open-source software licensed under the MIT license. It is an unofficial library: it is not affiliated with nor endorsed by OpenAI. Contributions are welcome.

openai-kotlin's People

Contributors

aallam, ahmedmirza994, andraxdev, aqua-ix, charlee-dev, devsrsouza, eliasjorgensen, emeasure-github-private, filipobornik, florentine-doemges, gama11, goooler, hamen, jochengucksnk, jtaub, matusekma, mxwell, patricklaflamme, rafsanjani, rasharab, renovate[bot], rjeeb, stuie, varenytsiamykhailo, voqaldev


openai-kotlin's Issues

Unresolved reference: OpenAI in 3.0.0-beta01

[screenshot] This is version 2.1.3
[screenshot] This is version 3.0.0-beta01

Most of the time, the OpenAI interface in 3.0.0-beta01 cannot be resolved. Once, it could be resolved (and I don't know why), but then another error occurred: only the interface could be referenced, not the public fun OpenAI(token: String): OpenAI factory function, which makes it impossible to create an instance of the interface.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: Use matchDepNames instead of matchPackageNames

Pending Approval

These branches will be created by Renovate only once you click their checkbox below.

  • chore(deps): update all dependencies (actions/checkout, actions/setup-java, gradle, gradle/wrapper-validation-action, java, macos, org.jetbrains.dokka, com.diffplug.gradle.spotless, com.vanniktech.maven.publish, org.jetbrains.kotlinx.binary-compatibility-validator, com.aallam.ulid:ulid-kotlin, ch.qos.logback:logback-classic, com.squareup.okio:okio-fakefilesystem, com.squareup.okio:okio-nodefilesystem, com.squareup.okio:okio, io.ktor:ktor-client-darwin, io.ktor:ktor-client-okhttp, io.ktor:ktor-client-mock, io.ktor:ktor-client-jetty, io.ktor:ktor-client-java, io.ktor:ktor-client-cio, io.ktor:ktor-client-apache, io.ktor:ktor-serialization-kotlinx-json, io.ktor:ktor-client-content-negotiation, io.ktor:ktor-client-auth, io.ktor:ktor-client-curl, io.ktor:ktor-client-logging, io.ktor:ktor-client-core, org.jetbrains.kotlinx:kotlinx-serialization-json, org.jetbrains.kotlinx:kotlinx-serialization-core, org.jetbrains.kotlinx:kotlinx-coroutines-test, org.jetbrains.kotlinx:kotlinx-coroutines-core, org.jetbrains.kotlin.plugin.serialization, org.jetbrains.kotlin.multiplatform)

Detected dependencies

asdf
.tool-versions
  • java 11.0.22+7
github-actions
.github/workflows/build.yml
  • actions/checkout v3
  • gradle/wrapper-validation-action v1
  • actions/setup-java v3
  • gradle/gradle-build-action v2
  • actions/checkout v3
  • gradle/wrapper-validation-action v1
  • actions/setup-java v3
  • gradle/gradle-build-action v2
  • actions/checkout v3
  • gradle/wrapper-validation-action v1
  • actions/setup-java v3
  • gradle/gradle-build-action v2
  • macos 11
.github/workflows/publish-snapshot.yml
  • actions/checkout v3
  • actions/checkout v3
  • actions/setup-java v3
  • gradle/gradle-build-action v2
  • macos 11
.github/workflows/publish.yml
  • actions/checkout v3
  • actions/setup-java v3
  • gradle/gradle-build-action v2
  • macos 11
gradle
gradle.properties
settings.gradle.kts
build.gradle.kts
build-support/settings.gradle.kts
build-support/build.gradle.kts
gradle/libs.versions.toml
  • org.jetbrains.kotlinx:kotlinx-coroutines-core 1.7.2
  • org.jetbrains.kotlinx:kotlinx-coroutines-test 1.7.2
  • org.jetbrains.kotlinx:kotlinx-serialization-core 1.5.1
  • org.jetbrains.kotlinx:kotlinx-serialization-json 1.5.1
  • io.ktor:ktor-client-core 2.3.2
  • io.ktor:ktor-client-logging 2.3.2
  • io.ktor:ktor-client-curl 2.3.2
  • io.ktor:ktor-client-auth 2.3.2
  • io.ktor:ktor-client-content-negotiation 2.3.2
  • io.ktor:ktor-serialization-kotlinx-json 2.3.2
  • io.ktor:ktor-client-apache 2.3.2
  • io.ktor:ktor-client-cio 2.3.2
  • io.ktor:ktor-client-java 2.3.2
  • io.ktor:ktor-client-jetty 2.3.2
  • io.ktor:ktor-client-mock 2.3.2
  • io.ktor:ktor-client-okhttp 2.3.2
  • io.ktor:ktor-client-darwin 2.3.2
  • com.squareup.okio:okio 3.4.0
  • com.squareup.okio:okio-nodefilesystem 3.4.0
  • com.squareup.okio:okio-fakefilesystem 3.4.0
  • ch.qos.logback:logback-classic 1.4.8
  • com.aallam.ulid:ulid-kotlin 1.2.1
  • com.aallam.ktoken:ktoken 0.3.0
  • org.jetbrains.kotlin.multiplatform 1.9.0
  • org.jetbrains.kotlin.plugin.serialization 1.9.0
  • org.jetbrains.kotlinx.binary-compatibility-validator 0.13.2
  • com.vanniktech.maven.publish 0.25.3
  • com.diffplug.gradle.spotless 6.20.0
  • org.jetbrains.dokka 1.8.20
openai-client/gradle.properties
openai-client/build.gradle.kts
openai-client-bom/gradle.properties
openai-client-bom/build.gradle.kts
openai-core/gradle.properties
openai-core/build.gradle.kts
sample/js/build.gradle.kts
sample/jvm/build.gradle.kts
sample/native/build.gradle.kts
gradle-wrapper
gradle/wrapper/gradle-wrapper.properties
  • gradle 8.2.1

  • Check this box to trigger a request for Renovate to run again on this repository

3.5 turbo API not responding

I am currently using the gpt-3.5-turbo API and it does not respond with anything; it also does not give me a time-out error. OpenAI does not have an official outage recorded either. Could you please confirm that the library is working as expected?
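One way to diagnose a silent hang like this is to turn on the client's request/response logging. A hedged sketch, assuming a recent client version where OpenAIConfig accepts a logging configuration (exact class and parameter names may differ between releases):

```kotlin
import com.aallam.openai.api.logging.LogLevel
import com.aallam.openai.client.LoggingConfig
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig

// Log full requests and responses to see whether the call
// actually leaves the client and what comes back.
val openAI = OpenAI(
    OpenAIConfig(
        token = System.getenv("OPENAI_API_KEY"),
        logging = LoggingConfig(logLevel = LogLevel.All),
    )
)
```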

Support http proxy

Like this: add a proxy field to OpenAIConfig,

val proxy: String? = null

and use it when creating the HTTP client:

fun createHttpClient(config: OpenAIConfig): HttpClient {
    return HttpClient {
        engine {
            config.proxy?.let { proxy = ProxyBuilder.http(it) }
        }
        // ...
    }
}
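For reference, later releases of the library did grow proxy support on OpenAIConfig. A hedged sketch of the shape that API took (class and parameter names are assumptions and may differ by version):

```kotlin
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.ProxyConfig

// Route all client traffic through an HTTP proxy;
// the proxy URL here is a placeholder.
val openAI = OpenAI(
    OpenAIConfig(
        token = System.getenv("OPENAI_API_KEY"),
        proxy = ProxyConfig.Http("http://localhost:8080"),
    )
)
```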

[Bug with optimizers]

Description

A "Mark the class as @Serializable or provide the serializer explicitly" exception occurs when optimizers and obfuscators are used.

Steps to Reproduce

  1. Compile this API with R8 or ProGuard optimizer
  2. Obfuscate the solution

Environment

  • openai-kotlin version: 3.2.0
  • Kotlin version: 1.7.21
  • OS: Windows 11

Additional Info

Code sample:

val completions: Flow<ChatCompletionChunk> = ai!!.chatCompletions(chatCompletionRequest)

Note

It works fine without optimizers

Stacktrace:

Process: org.teslasoft.assistant, PID: 766
b6.h: Serializer for class 'b' is not found.
Mark the class as @Serializable or provide the serializer explicitly.
  at c4.h.I(Unknown Source:32)
  at kotlinx.coroutines.b0.Y(Unknown Source:25)
  at n7.o.k(Unknown Source:235)
  at n7.j.l(Unknown Source:31)
  at j5.a.h(Unknown Source:8)
  at kotlinx.coroutines.j0.run(Unknown Source:102)
  at android.os.Handler.handleCallback(Handler.java:942)
  at android.os.Handler.dispatchMessage(Handler.java:99)
  at android.os.Looper.loopOnce(Looper.java:201)
  at android.os.Looper.loop(Looper.java:288)
  at android.app.ActivityThread.main(ActivityThread.java:7913)
  at java.lang.reflect.Method.invoke(Native Method)
  at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
  at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:942)
Suppressed: kotlinx.coroutines.h0: [r1{Cancelling}@117e083, Dispatchers.Main]

[error : You didn't provide an API key]

Description

I set everything up correctly for chat completion, but I still have a problem.

Steps to Reproduce

My Code is

val chatCompletionRequest = ChatCompletionRequest(
    model = ModelId("gpt-3.5-turbo"),
    messages = listOf(
        ChatMessage(
            role = ChatRole.User,
            content = "Hello!"
        )
    )
)

val openAi = OpenAI(api_key)

GlobalScope.launch {
    val completion: ChatCompletion = openAi.chatCompletion(chatCompletionRequest)
    answer = completion.choices[0].message.toString()
}

I/System.out: HttpClient: REQUEST: https://api.openai.com/v1/chat/completions
I/System.out: METHOD: HttpMethod(value=POST)
I/System.out: COMMON HEADERS
I/System.out: -> Accept: application/json
I/System.out: -> Accept-Charset: UTF-8
I/System.out: -> Authorization: Bearer MY_API_KEY
I/System.out: CONTENT HEADERS
I/System.out: -> Content-Length: 73
I/System.out: -> Content-Type: application/json
D/NetworkSecurityConfig: No Network Security Config specified, using platform default
I/System.out: HttpClient: RESPONSE: 200
I/System.out: METHOD: HttpMethod(value=POST)
I/System.out: FROM: https://api.openai.com/v1/chat/completions
I/System.out: COMMON HEADERS
I/System.out: -> access-control-allow-origin: *
I/System.out: -> alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
I/System.out: -> cache-control: no-cache, must-revalidate
I/System.out: -> cf-cache-status: DYNAMIC
I/System.out: -> cf-ray: 7b1bd2aeae3db9e1-OTP
I/System.out: -> content-type: application/json
I/System.out: -> date: Sun, 02 Apr 2023 20:29:16 GMT
I/System.out: -> openai-model: gpt-3.5-turbo-0301
I/System.out: -> openai-organization: user-dermqywvibsj8hgo3u9pi910
I/System.out: -> openai-processing-ms: 645
I/System.out: -> openai-version: 2020-10-01
I/System.out: -> server: cloudflare
I/System.out: -> strict-transport-security: max-age=15724800; includeSubDomains
I/System.out: -> x-ratelimit-limit-requests: 20
I/System.out: -> x-ratelimit-remaining-requests: 19
I/System.out: -> x-ratelimit-reset-requests: 3s
I/System.out: -> x-request-id: 0f69a1910c7a3bad98d5ec558bcd0cb4

My error message is:

{
  "error": {
    "message": "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

Environment

  • openai-kotlin version: [3.2.0]
  • Kotlin version: ['1.8.20']
  • OS: [Android 11]

Serializer for class 'ChatCompletionRequest' is not found.

Description

When building a release version of my Android app with R8, I get the error: "Serializer for class 'ChatCompletionRequest' is not found."

Steps to Reproduce

  1. Build release APK
  2. Install APK
  3. Send a chat completion request: public suspend fun chatCompletion(request: ChatCompletionRequest): ChatCompletion
  4. App crashes with this exception:

com.aallam.openai.api.exception.OpenAIHttpException: Serializer for class 'ChatCompletionRequest' is not found. Mark the class as @Serializable or provide the serializer explicitly.
  at com.aallam.openai.client.internal.http.HttpTransport.handleException(HttpTransport.kt:46)
  at com.aallam.openai.client.internal.http.HttpTransport.perform(HttpTransport.kt:25)
  at com.aallam.openai.client.internal.http.HttpTransport$perform$1.invokeSuspend(HttpTransport.kt:0)
  at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
  at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104)
  at androidx.compose.ui.platform.AndroidUiDispatcher.performTrampolineDispatch(AndroidUiDispatcher.android.kt:81)
  at androidx.compose.ui.platform.AndroidUiDispatcher.access$performTrampolineDispatch(AndroidUiDispatcher.android.kt:41)
  at androidx.compose.ui.platform.AndroidUiDispatcher$dispatchCallback$1.run(AndroidUiDispatcher.android.kt:57)
  at android.os.Handler.handleCallback(Handler.java:942)
  at android.os.Handler.dispatchMessage(Handler.java:99)
  at android.os.Looper.loopOnce(Looper.java:226)
  at android.os.Looper.loop(Looper.java:313)
  at android.app.ActivityThread.main(ActivityThread.java:8757)
  at java.lang.reflect.Method.invoke(Native Method)
  at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:571)
  at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1067)
  Suppressed: kotlinx.coroutines.DiagnosticCoroutineContextException: [androidx.compose.ui.platform.MotionDurationScaleImpl@a692934, androidx.compose.runtime.BroadcastFrameClock@b8c935d, StandaloneCoroutine{Cancelling}@6af44d2, AndroidUiDispatcher@c0479a3]
Caused by: kotlinx.serialization.SerializationException: Serializer for class 'ChatCompletionRequest' is not found. Mark the class as @Serializable or provide the serializer explicitly.
  at kotlinx.serialization.internal.Platform_commonKt.serializerNotRegistered(Platform.common.kt:91)
  at kotlinx.serialization.SerializersKt__SerializersKt.serializer(Serializers.kt:149)
  at kotlinx.serialization.SerializersKt.serializer(Unknown Source:1)
  at io.ktor.serialization.kotlinx.SerializerLookupKt.guessSerializer(SerializerLookup.kt:44)
  at io.ktor.serialization.kotlinx.KotlinxSerializationBase.serialize$ktor_serialization_kotlinx(KotlinxSerializationBase.kt:34)
  at io.ktor.serialization.kotlinx.KotlinxSerializationConverter.serializeNullable(KotlinxSerializationConverter.kt:54)
  at io.ktor.client.plugins.contentnegotiation.ContentNegotiation.convertRequest$ktor_client_content_negotiation(ContentNegotiation.kt:156)
  at io.ktor.client.plugins.contentnegotiation.ContentNegotiation$Plugin$install$1.invokeSuspend(ContentNegotiation.kt:202)
  at io.ktor.client.plugins.contentnegotiation.ContentNegotiation$Plugin$install$1.invoke(ContentNegotiation.kt:0)
  at io.ktor.client.plugins.contentnegotiation.ContentNegotiation$Plugin$install$1.invoke(ContentNegotiation.kt:0)
  at io.ktor.util.pipeline.SuspendFunctionGun.loop(SuspendFunctionGun.kt:120)
  at io.ktor.util.pipeline.SuspendFunctionGun.proceed(SuspendFunctionGun.kt:78)
  at io.ktor.util.pipeline.SuspendFunctionGun.proceedWith(SuspendFunctionGun.kt:88)
  at io.ktor.client.plugins.HttpCallValidator$Companion$install$1.invokeSuspend(HttpCallValidator.kt:130)
  at io.ktor.client.plugins.HttpCallValidator$Companion$install$1.invoke(HttpCallValidator.kt:0)
  at io.ktor.client.plugins.HttpCallValidator$Companion$install$1.invoke(HttpCallValidator.kt:0)
  at io.ktor.util.pipeline.SuspendFunctionGun.loop(SuspendFunctionGun.kt:120)
  at io.ktor.util.pipeline.SuspendFunctionGun.proceed(SuspendFunctionGun.kt:78)
  at io.ktor.client.plugins.HttpRequestLifecycle$Plugin$install$1.invokeSuspend(HttpRequestLifecycle.kt:38)
  at io.ktor.client.plugins.HttpRequestLifecycle$Plugin$install$1.invoke(HttpRequestLifecycle.kt:0)
  at io.ktor.client.plugins.HttpRequestLifecycle$Plugin$install$1.invoke(HttpRequestLifecycle.kt:0)
  at io.ktor.util.pipeline.SuspendFunctionGun.loop(SuspendFunctionGun.kt:120)
  at io.ktor.util.pipeline.SuspendFunctionGun.proceed(SuspendFunctionGun.kt:78)
  at io.ktor.util.pipeline.SuspendFunctionGun.execute$ktor_utils(SuspendFunctionGun.kt:98)
  at io.ktor.util.pipeline.Pipeline.execute(Pipeline.kt:77)
  at io.ktor.client.HttpClient.execute$ktor_client_core(HttpClient.kt:191)
  at io.ktor.client.statement.HttpStatement.executeUnsafe(HttpStatement.kt:108)
  at io.ktor.client.statement.HttpStatement.execute(HttpStatement.kt:47)
  at io.ktor.client.statement.HttpStatement.execute(HttpStatement.kt:62)
  at com.aallam.openai.client.internal.api.ChatApi$chatCompletion$2.invokeSuspend(ChatApi.kt:22)
  at com.aallam.openai.client.internal.api.ChatApi$chatCompletion$2.invoke(ChatApi.kt:0)
  at com.aallam.openai.client.internal.api.ChatApi$chatCompletion$2.invoke(ChatApi.kt:0)
  at com.aallam.openai.client.internal.http.HttpTransport.perform(HttpTransport.kt:22)
  at com.aallam.openai.client.internal.api.ChatApi.chatCompletion(ChatApi.kt:34)
  at com.aallam.openai.client.internal.OpenAIApi.chatCompletion(OpenAIApi.kt:0)

Environment

  • openai-kotlin version: [e.g. 3.2.0]
  • Kotlin version: [e.g. 1.8.10]
  • OS: [macOS]

Additional Info

I tried adding the ProGuard rules to the proguard-rules.pro file as mentioned in the README:

-keepattributes InnerClasses

-if @kotlinx.serialization.Serializable class
com.aallam.openai.api.** {
    static **$* *;
}
-keepnames class <1>$$serializer {
    static <1>$$serializer INSTANCE;
}

Facing issue in ImageVariation call

I integrated this library into my project. It works fine for content generation ("v1/completions") and image generation ("v1/images/generations"). But when I use ImageVariation ("v1/images/variations"), I hit an issue that I am unable to fix:

D 1-->file:///storage/emulated/0/DCIM/Camera/IMG_20230125_180236381.jpg
2023-01-25 18:02:38.499 8356-8356 hssn com.sparklab.ai D 2-->file:///data/user/0/com.sparklab.ai/files/sparklab/sparklab_hssnmirza.png
2023-01-25 18:02:38.975 8356-8396 OpenGLRenderer com.sparklab.ai D endAllActiveAnimators on 0xb4000076461a44d0 (RippleDrawable) with handle 0xb400007556108220
2023-01-25 18:02:40.404 8356-8583 System.out com.sparklab.ai I HttpClient: REQUEST: https://api.openai.com/v1/images/variations
2023-01-25 18:02:40.404 8356-8583 System.out com.sparklab.ai I METHOD: HttpMethod(value=POST)
2023-01-25 18:02:40.404 8356-8583 System.out com.sparklab.ai I COMMON HEADERS
2023-01-25 18:02:40.404 8356-8583 System.out com.sparklab.ai I -> Accept: application/json
2023-01-25 18:02:40.404 8356-8583 System.out com.sparklab.ai I -> Accept-Charset: UTF-8
2023-01-25 18:02:40.404 8356-8583 System.out com.sparklab.ai I -> Authorization: Bearer <apiKey>
2023-01-25 18:02:40.404 8356-8583 System.out com.sparklab.ai I CONTENT HEADERS
2023-01-25 18:02:40.404 8356-8583 System.out com.sparklab.ai I -> Content-Type: multipart/form-data; boundary=-651c0b63-3b3247d1-5ce2608bf515725-441b30e6-4507d4a361a52508-1fc4734e-
2023-01-25 18:02:40.535 8356-8585 TrafficStats com.sparklab.ai D tagSocket(96) with statsTag=0xffffffff, statsUid=-1
2023-01-25 18:02:42.753 8356-8585 System.out com.sparklab.ai I HttpClient: RESPONSE: 400 BAD REQUEST
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I METHOD: HttpMethod(value=POST)
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I FROM: https://api.openai.com/v1/images/variations
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I COMMON HEADERS
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> access-control-allow-origin: *
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> connection: keep-alive
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> content-length: 161
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> content-type: application/json
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> date: Wed, 25 Jan 2023 13:02:40 GMT
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> openai-organization: user-zinuwdnhn5tnrbfqaxqlrm8u
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> openai-processing-ms: 147
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> openai-version: 2020-10-01
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> strict-transport-security: max-age=15724800; includeSubDomains
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> x-android-received-millis: 1674651762739
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> x-android-response-source: NETWORK 400
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> x-android-selected-protocol: http/1.1
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> x-android-sent-millis: 1674651761388
--------- beginning of crash
2023-01-25 18:02:42.754 8356-8585 System.out com.sparklab.ai I -> x-request-id: 8ba8f7abb134b74cd3ae5b2d6f41c489
2023-01-25 18:02:42.798 8356-8592 AndroidRuntime com.sparklab.ai E FATAL EXCEPTION: DefaultDispatcher-worker-8
Process: com.sparklab.ai, PID: 8356
com.aallam.openai.api.exception.OpenAIHttpException
  at com.aallam.openai.client.internal.http.HttpTransport.perform(HttpTransport.kt:24)
  at com.aallam.openai.client.internal.http.HttpTransport$perform$1.invokeSuspend(Unknown Source:15)
  at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
  at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
  at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
  at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
  at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
  at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
  at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
  at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
  Suppressed: kotlinx.coroutines.DiagnosticCoroutineContextException: [StandaloneCoroutine{Cancelling}@9c8cf2c, Dispatchers.Default]
Caused by: com.aallam.openai.api.exception.OpenAIAPIException: (statusCode=400, body='{ "error": { "code": null, "message": "Uploaded image must be a PNG and less than 4 MB.", "param": null, "type": "invalid_request_error" } }')
  at com.aallam.openai.client.internal.http.HttpTransport.bodyOrThrow(HttpTransport.kt:32)
  at com.aallam.openai.client.internal.http.HttpTransport.perform(HttpTransport.kt:22)
  ... 9 more
2023-01-25 18:02:42.832 8356-8592 Process com.sparklab.ai I Sending signal. PID: 8356 SIG: 9
My image is already in .png format and its size is only a few KB, so please let me know how I can fix this.

How to handle errors

Hi, can you explain how to handle errors while using this library? Currently it throws OpenAIHttpException, but it does not surface UnknownHostException when one occurs.
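A sketch of catching the library's exceptions, with the exception package names assumed from the stack traces elsewhere in these issues. An engine-level failure such as UnknownHostException may arrive wrapped, so a generic fallback is kept last:

```kotlin
import com.aallam.openai.api.chat.ChatCompletionRequest
import com.aallam.openai.api.exception.OpenAIAPIException
import com.aallam.openai.api.exception.OpenAIHttpException
import com.aallam.openai.client.OpenAI

// Hypothetical helper: most specific exception types first.
suspend fun safeCompletion(openAI: OpenAI, request: ChatCompletionRequest): String =
    try {
        openAI.chatCompletion(request).choices.first().message.content.orEmpty()
    } catch (e: OpenAIAPIException) {
        "API error: ${e.message}"        // the API returned an error payload
    } catch (e: OpenAIHttpException) {
        "transport error: ${e.message}"  // the request produced no valid response
    } catch (e: Exception) {
        "unexpected error: ${e.message}" // e.g. a wrapped UnknownHostException
    }
```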

Support retry on 429 Too Many Requests

The new chat API has a rate limit of 20 requests/minute. If this is exceeded, the API returns 429, which fails the request. Ktor has a plugin to auto-retry these requests (https://ktor.io/docs/client-retry.html), so the request succeeds; it just takes longer. The internal HttpClient is not exposed, though, so this has to be implemented on the library side.
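For what it's worth, later versions of openai-kotlin did grow a retry knob on OpenAIConfig. A hedged sketch of that shape (the class and parameter names are assumptions and may differ by release):

```kotlin
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.RetryStrategy

// Retry rate-limited requests a few times with backoff
// instead of failing immediately on 429.
val openAI = OpenAI(
    OpenAIConfig(
        token = System.getenv("OPENAI_API_KEY"),
        retry = RetryStrategy(maxRetries = 3),
    )
)
```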

[Suggestion]

Suggestion related to Serializable classes

This suggestion is related to the issue #147

I suggest adding id 'kotlinx-serialization' to build.gradle, because a serialization exception may still occur when extremely aggressive optimizers are used (I compressed my app from 20 MB to only 2 MB).

All my problems went away when I added

id 'kotlinx-serialization'

to the app module's build.gradle (it can be added to other modules too) and

id 'org.jetbrains.kotlin.plugin.serialization' version '1.6.21'

to the project build.gradle.

Maybe this will help if someone else also has problems related to optimizers.

Include cosine_similarity

This is not exactly an issue; rather, the discussion points toward including a cosine_similarity utility function.

I think it could be a helpful addition to the embeddings module.

What do you think, @aallam?
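For anyone who needs it in the meantime, cosine similarity is straightforward to write by hand. A minimal, library-independent sketch over embedding vectors:

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), yielding a value in [-1, 1].
fun cosineSimilarity(a: List<Double>, b: List<Double>): Double {
    require(a.size == b.size) { "vectors must have the same dimension" }
    val dot = a.indices.sumOf { a[it] * b[it] }
    val normA = sqrt(a.sumOf { it * it })
    val normB = sqrt(b.sumOf { it * it })
    return dot / (normA * normB)
}
```

Identical vectors score 1.0; orthogonal vectors score 0.0.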

I can't import openai-kotlin with the latest Kotlin version

[screenshot]
Sorry for this stupid question. I have searched on Google and found no useful solutions.
I have already updated my Kotlin plugin to the latest version.
How can I solve this problem?
Thanks for your help!

TranscriptionRequest responseFormat always text

Hi, as I understand it, we can request a responseFormat in TranscriptionRequest. But whether I use json or verbose_json, transcription.text is always in plain-text format. I can see the JSON response in the body when I enable logging.

The question is: how do I get the JSON-formatted version of the response?

Also, when I use srt or vtt, I get the error below. I can see the actual response in the log output, but it looks like Ktor does not know how to handle it.

com.aallam.openai.api.exception.OpenAIHttpException
at com.aallam.openai.client.internal.http.HttpTransport.perform(HttpTransport.kt:23)
at com.aallam.openai.client.internal.http.HttpTransport$perform$1.invokeSuspend(HttpTransport.kt:0)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source:1)
at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking$default(Builders.kt:38)
at kotlinx.coroutines.BuildersKt.runBlocking$default(Unknown Source:1)
at com.test.client.openai.conector.OpenAiWhisperConnector.upload(OpenAiWhisperConnector.kt:43)
at com.test.client.openai.OpenAiWhisperClient.upload(OpenAiWhisperClient.kt:21)

Streaming is blocking

Here is a super simple example of streaming completion responses

suspend fun main() {
    val s = System.currentTimeMillis()
    println("======= ${(System.currentTimeMillis() - s) / 1000}")
    val h: Flow<TextCompletion> = openAI.completions(CompletionRequest(text, "Count to 100. Produce output in chunks of 10", 2000))
    h.collect {
        println("======= ${(System.currentTimeMillis() - s) / 1000}")
        // print(it.choices[0].text)
    }
}

The output is

======= 0
======= 16
======= 16
======= 16
======= 16
======= 16
======= 16
======= 16
======= 16
======= 16
======= 16
======= 16
======= 16
======= 16
.....

You can see that it takes a long time for any output to appear, and then it all arrives in one go.

Here is a streaming version I hacked up

suspend fun main() {
    val s = System.currentTimeMillis()
    println("======= ${(System.currentTimeMillis() - s) / 1000}")
    val h: Flow<TextCompletion> = openAI.streaming(CompletionRequest(text, "Count to 100. Produce output in chunks of 10", 2000))
    h.collect {
        println("======= ${(System.currentTimeMillis() - s) / 1000}")
        // print(it.choices[0].text)
    }
}

which immediately starts to return results

======= 0
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 1
======= 2
======= 2
======= 2
======= 2
....

Since you make a lot of your classes private and internal, I had to do cruel and unnatural things with reflection to create an extension method. But the gist of it is to use non-blocking calls like

client.preparePost { /* ... */ }.execute { httpResponse ->
    httpResponse.body<ByteReadChannel>().consumeEachBufferRange { buffer: ByteBuffer, _ -> /* ... */ }
}

I was not able to find any implementation of SSE for the Ktor client.
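Note that the library's chatCompletions function (used elsewhere in these issues) does return a cold Flow of chunks. A minimal sketch of consuming it incrementally (assuming the streaming path works for your version):

```kotlin
import com.aallam.openai.api.chat.ChatCompletionChunk
import com.aallam.openai.api.chat.ChatCompletionRequest
import com.aallam.openai.client.OpenAI
import kotlinx.coroutines.flow.Flow

// Print tokens as they arrive instead of waiting for the full response.
suspend fun stream(openAI: OpenAI, request: ChatCompletionRequest) {
    val chunks: Flow<ChatCompletionChunk> = openAI.chatCompletions(request)
    chunks.collect { chunk ->
        // each chunk carries a small delta of the generated text
        print(chunk.choices.first().delta?.content.orEmpty())
    }
}
```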

Help me, I can't catch the error and it still crashes the app

  override suspend fun doWork(): Result {
    var chatAI: ChatAI? = null
    try{
      if(openAI == null){
        chatAI  = ChatAI(context)
        chatAI.getDataFromSharedPref(context)
        openAI = chatAI.getApiKey()?.let { OpenAI(it) }
        println("> Getting available engines...")
        openAI?.models()?.forEach(::println)
        turbo = chatAI.getModelId()?.let { ModelId(it) }?.let { openAI?.model(modelId = it) }
      }
      //Init shared pref

      val text = workerParams.inputData.getString(DATA_TEXT) ?: return Result.failure()
      println("\n> Create chat completions...")
      val userToken = UserToken(context)
      var maxToken = (userToken.token - (Until.countTokens(text) + TOLERANCE))
      if(maxToken < 1){
        maxToken = 1
      }
      if(maxToken > chatAI?.getMaxTokenModel()!!){
        maxToken = chatAI.getMaxTokenModel()!! - (Until.countTokens(text) + TOLERANCE)
      }
      println("Max token: $maxToken")
      val chatCompletionRequest = turbo?.let {
        ChatCompletionRequest(
          model = it.id,
          messages = listOf(
            ChatMessage(
              role = ChatRole.User,
              content = text
            )
          ),
          maxTokens = maxToken
        )
      }
//      openAI?.chatCompletion(chatCompletionRequest)?.choices?.forEach(::println)
      println("\n>️ Creating chat completions stream...")
      val response = StringBuilder()
      if (chatCompletionRequest != null) {
        openAI?.chatCompletions(chatCompletionRequest)
          ?.onEach {
            response.append(it.choices.first().delta?.content.orEmpty())
            println("Response Raw: ${it.choices.first().delta?.content.orEmpty()}")
            saveLiveData(context, response.toString().trim())
            data.postValue(response.toString().trim())
            println("Token ${it.usage.toString()}")
          }
          ?.onCompletion {
            println()
          }
          ?.launchIn(this)
          ?.join()
      }
      return  Result.success(Data.Builder().putString(DATA_SUCCESS, response.toString().trim()).build())
    } catch (e: OpenAIAPIException) {
      return Result.failure(Data.Builder().putString(DATA_FAILURE, e.toString().trim()).build())
    } catch (e: OpenAIHttpException) {
      return Result.failure(Data.Builder().putString(DATA_FAILURE, e.toString().trim()).build())
    } catch (e: OpenAIException) {
      // Catch the more specific OpenAI exceptions first: OpenAIAPIException and
      // OpenAIHttpException are subtypes of OpenAIException
      return Result.failure(Data.Builder().putString(DATA_FAILURE, e.toString().trim()).build())
    } catch (e: AndroidRuntimeException) {
      return Result.failure(Data.Builder().putString(DATA_FAILURE, e.toString().trim()).build())
    } catch (e: Exception) {
      // The catch-all must come last, otherwise the blocks after it are unreachable
      return Result.failure(Data.Builder().putString(DATA_FAILURE, e.toString().trim()).build())
    }
  }

Please make host part of OpenAIConfig

Hi,

OpenAI is starting to deploy environments to Azure. It would be great to have the host as part of OpenAIConfig, so createHttpClient can take it from the config rather than a hardcoded value. I couldn't see an easy way to override it, as many classes are marked internal.

Thank you.

parameter user

The ChatCompletionRequest class needs a user parameter.
It is an official API parameter that can be used to facilitate the management of sub-user sessions.
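A sketch of what the requested parameter could look like, mirroring the API's `user` field for end-user attribution (whether and how the Kotlin client exposes it is exactly what this issue asks for):

```kotlin
// Hypothetical usage of the requested `user` parameter (not confirmed
// library API per this issue); the value should be a stable, opaque
// identifier for the sub-user session
val request = ChatCompletionRequest(
    model = ModelId("gpt-3.5-turbo"),
    messages = listOf(ChatMessage(role = ChatRole.User, content = "Hello!")),
    user = "end-user-1234",
)
```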

New model gpt-3.5-turbo error

It throws this error:

(statusCode=404, body='{
  "error": {
    "message": "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?",
    "type": "invalid_request_error",
    "param": "model",
    "code": null
  }
}')

Does ChatGPT remember previous answer when using this API?

My code

val chatCompletionRequest = ChatCompletionRequest(
    model = ModelId("gpt-3.5-turbo-0301"),
    messages = listOf(
        ChatMessage(
            role = ChatRole.User,
            content = request
        )
    )
)

try {
    val completion: ChatCompletion = ai!!.chatCompletion(chatCompletionRequest)

    /* other code */

} catch (e: Exception) {
    /* other code */
}

Steps to reproduce:
Try to ask a question related to the previous question.

Screenshot:
photo_2023-03-06_22-05-06

*** Maybe I'm doing something wrong? ***
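For context, the chat API itself is stateless: it only "remembers" the messages you resend with each request, so follow-up questions need the earlier turns included in the `messages` list. A minimal sketch of accumulating history client-side (the `Turn`/`Conversation` types are illustrative stand-ins, not library classes):

```kotlin
// Illustrative stand-ins, not library classes: the API only "remembers"
// the turns you resend, so keep the history and ship all of it each time
data class Turn(val role: String, val content: String)

class Conversation {
    private val turns = mutableListOf<Turn>()

    fun user(text: String) = apply { turns += Turn("user", text) }
    fun assistant(text: String) = apply { turns += Turn("assistant", text) }

    // Map this list into `messages` when building the next ChatCompletionRequest
    fun history(): List<Turn> = turns.toList()
}
```

Each new request then carries the full history, which is what makes the model appear to remember the previous answer.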

Error in generating images

Hi, I am using the code below to generate a list of images and I am getting an OpenAIHttpException.

@OptIn(BetaOpenAI::class)
override suspend fun generateImage(requestModel: RequestModel): GeneratedImage {
    val openAi = OpenAI(Constants.TOKEN)
    try {
        val images = openAi.imageJSON(
            creation = ImageCreation(
                requestModel.prompt,
                requestModel.n,
                ImageSize.is1024x1024,
            )
        )
    } catch (ex: Exception) {
        Log.d("requesttModell", "generateImage: exception $ex")
    }

    return remoteService.generateImage(requestModel)
}

The Error is generateImage: exception com.aallam.openai.api.exception.OpenAIHttpException

Kotlin Multiplatform dependency resolution iOS simulator

I'm trying out your library in a multiplatform project, but while syncing Gradle keeps complaining for the iOS simulator target:

:shared:iosSimulatorArm64Main: Could not resolve com.aallam.openai:openai-client:2.1.2.
Required by:
    project :shared

My source set configuration looks like this:

    sourceSets {
        val commonMain by getting {
            dependencies {
                implementation(libs.ktor.client.core)
                implementation(libs.kotlinx.coroutines.core)
                implementation(libs.openai.client)
            }
        }
        val commonTest by getting {
            dependencies {
                implementation(kotlin("test"))
            }
        }
        val androidMain by getting {
            dependencies {
                implementation(libs.ktor.client.okhttp)
            }
        }
        val androidTest by getting
        val iosX64Main by getting
        val iosArm64Main by getting
        val iosSimulatorArm64Main by getting
        val iosMain by creating {
            dependsOn(commonMain)
            iosX64Main.dependsOn(this)
            iosArm64Main.dependsOn(this)
            iosSimulatorArm64Main.dependsOn(this)
            dependencies {
                implementation(libs.ktor.client.darwin)
            }
        }
        val iosX64Test by getting
        val iosArm64Test by getting
        val iosSimulatorArm64Test by getting
        val iosTest by creating {
            dependsOn(commonTest)
            iosX64Test.dependsOn(this)
            iosArm64Test.dependsOn(this)
            iosSimulatorArm64Test.dependsOn(this)
        }
    }

And here are the relevant sections of the version catalog:

[versions]
ktor = "2.2.2"
coroutines = "1.6.4"
openai = "2.1.2"

[libraries]
ktor-client-core = { module = "io.ktor:ktor-client-core", version.ref = "ktor" }
ktor-client-okhttp = { module = "io.ktor:ktor-client-okhttp", version.ref = "ktor" }
ktor-client-darwin = { module = "io.ktor:ktor-client-darwin", version.ref = "ktor" }
kotlinx-coroutines-core = { module = "org.jetbrains.kotlinx:kotlinx-coroutines-core", version.ref = "coroutines" }
kotlinx-coroutines-android = { module = "org.jetbrains.kotlinx:kotlinx-coroutines-android", version.ref = "coroutines" }
openai-client = { module = "com.aallam.openai:openai-client", version.ref = "openai" }

Runtime Error while initialising OpenAI

Caused by: java.lang.IllegalStateException: Failed to find HTTP client engine implementation in the classpath: consider adding client engine dependency. See https://ktor.io/docs/http-client-engines.html
at io.ktor.client.HttpClientJvmKt.(HttpClientJvm.kt:43)

Environment

Android 13 phone
implementations:
compose,
hilt,
openai-client,
default implementations

Additional Info

So when trying to initialise OpenAI I get this error. On the internet some people write that the problem is the Ktor version. Could you fix it, please?
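That exception means no Ktor client engine is on the runtime classpath; the usual fix is to add one alongside the client (engine choice and versions below are illustrative, Gradle Kotlin DSL):

```kotlin
dependencies {
    implementation("com.aallam.openai:openai-client:3.7.1")
    // Any Ktor engine works; OkHttp is a common choice on Android
    implementation("io.ktor:ktor-client-okhttp:2.3.2")
}
```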

com.aallam.openai.api.exception.OpenAITimeoutException: Connect timeout has expired [url=https://api.openai.com/v1/chat/completions, connect_timeout=unknown ms]

Environment:

  • openai-kotlin version: [3.2.0]
  • Kotlin version: ['1.7.10']
  • engine:[ktor-client-okhttp:2.2.4]

dependencies:

  • implementation("com.aallam.openai:openai-client:3.2.0")
  • implementation("io.ktor:ktor-client-okhttp:2.2.4")

my code is :

    val openAI = OpenAI(OpenAIConfig("XXXX", timeout = Timeout(socket = 60.seconds)))
    val chatCompletionRequest = ChatCompletionRequest(
            model = ModelId("gpt-3.5-turbo"),
            messages = listOf(
                    ChatMessage(
                            role = ChatRole.User,
                            content = "Hello!"
                    )
            )
    )
    val completion: ChatCompletion = openAI.chatCompletion(chatCompletionRequest)
    val answer = completion.choices[0].message.toString()

my error message is:

Exception in thread "main" com.aallam.openai.api.exception.OpenAITimeoutException: Connect timeout has expired [url=https://api.openai.com/v1/chat/completions, connect_timeout=unknown ms]
	at com.aallam.openai.client.internal.http.HttpTransport.handleException(HttpTransport.kt:45)
	at com.aallam.openai.client.internal.http.HttpTransport.perform(HttpTransport.kt:25)
	at com.aallam.openai.client.internal.http.HttpTransport$perform$1.invokeSuspend(HttpTransport.kt)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
	at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SuspendFunctionGun.kt:138)
	at io.ktor.util.pipeline.SuspendFunctionGun.access$resumeRootWith(SuspendFunctionGun.kt:11)
	at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SuspendFunctionGun.kt:55)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
	at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SuspendFunctionGun.kt:138)
	at io.ktor.util.pipeline.SuspendFunctionGun.access$resumeRootWith(SuspendFunctionGun.kt:11)
	at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SuspendFunctionGun.kt:55)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
	at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SuspendFunctionGun.kt:138)
	at io.ktor.util.pipeline.SuspendFunctionGun.access$resumeRootWith(SuspendFunctionGun.kt:11)
	at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SuspendFunctionGun.kt:55)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
	at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SuspendFunctionGun.kt:138)
	at io.ktor.util.pipeline.SuspendFunctionGun.access$resumeRootWith(SuspendFunctionGun.kt:11)
	at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SuspendFunctionGun.kt:55)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
	at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SuspendFunctionGun.kt:138)
	at io.ktor.util.pipeline.SuspendFunctionGun.access$resumeRootWith(SuspendFunctionGun.kt:11)
	at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SuspendFunctionGun.kt:55)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
	at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SuspendFunctionGun.kt:138)
	at io.ktor.util.pipeline.SuspendFunctionGun.access$resumeRootWith(SuspendFunctionGun.kt:11)
	at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SuspendFunctionGun.kt:55)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
	at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SuspendFunctionGun.kt:138)
	at io.ktor.util.pipeline.SuspendFunctionGun.access$resumeRootWith(SuspendFunctionGun.kt:11)
	at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SuspendFunctionGun.kt:55)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
	at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:178)
	at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:166)
	at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:397)
	at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl(CancellableContinuationImpl.kt:431)
	at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$default(CancellableContinuationImpl.kt:420)
	at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:328)
	at kotlinx.coroutines.ResumeAwaitOnCompletion.invoke(JobSupport.kt:1409)
	at kotlinx.coroutines.JobSupport.notifyCompletion(JobSupport.kt:1520)
	at kotlinx.coroutines.JobSupport.completeStateFinalization(JobSupport.kt:323)
	at kotlinx.coroutines.JobSupport.finalizeFinishingState(JobSupport.kt:240)
	at kotlinx.coroutines.JobSupport.tryMakeCompletingSlowPath(JobSupport.kt:906)
	at kotlinx.coroutines.JobSupport.tryMakeCompleting(JobSupport.kt:863)
	at kotlinx.coroutines.JobSupport.makeCompletingOnce$kotlinx_coroutines_core(JobSupport.kt:828)
	at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:100)
	at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
	at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104)
	at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
	at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
	at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
	at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)
Caused by: io.ktor.client.network.sockets.ConnectTimeoutException: Connect timeout has expired [url=https://api.openai.com/v1/chat/completions, connect_timeout=unknown ms]
	at io.ktor.client.plugins.HttpTimeoutKt.ConnectTimeoutException(HttpTimeout.kt:213)
	at io.ktor.client.engine.okhttp.OkUtilsKt.mapOkHttpException(OkUtils.kt:78)
	at io.ktor.client.engine.okhttp.OkUtilsKt.access$mapOkHttpException(OkUtils.kt:1)
	at io.ktor.client.engine.okhttp.OkHttpCallback.onFailure(OkUtils.kt:39)
	at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:525)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.net.SocketTimeoutException: Connect timed out
	at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:546)
	at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)
	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
	at java.base/java.net.Socket.connect(Socket.java:633)
	at okhttp3.internal.platform.Platform.connectSocket(Platform.kt:128)
	at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.kt:295)
	at okhttp3.internal.connection.RealConnection.connect(RealConnection.kt:207)
	at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:226)
	at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:106)
	at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:74)
	at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:255)
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
	at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:517)
	... 3 more

First of all, I suspected there was a problem with my network, but when I use a browser to access the API at https://api.openai.com/v1/chat/completions, no timeout occurs. The response received by the browser:

{
    "error": {
        "message": "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}
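The stack trace above is a connect timeout, which `Timeout(socket = …)` alone does not change. A sketch of also raising the connect (and request) timeouts, assuming the `Timeout(request, connect, socket)` parameters of this client version:

```kotlin
// Sketch only: connect/request/socket are separate limits; the original
// code set only `socket`, while the failure here is on connect
val openAI = OpenAI(
    OpenAIConfig(
        token = "XXXX",
        timeout = Timeout(connect = 30.seconds, request = 120.seconds, socket = 60.seconds),
    )
)
```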

Android/iOS support

First off, awesome library! Super useful

Would it be possible to add Android and iOS support? According to the KMP metadata file, it only targets the following:

  • JVM
  • JS
  • linux_x64
  • macos_arm64
  • macos_x64
  • mingw_x64

SocketTimeoutException raised too frequently

Recently, this error has been shown very frequently, so I modified the timeout params in the config class from 30 secs (default) to 60 secs.

A very short message from the AI is shown, but if the answer is long, it fails every time, as below.

I'm not sure whether this is OpenAI's problem or this app's.

ERROR: com.aallam.openai.api.exception.OpenAIHttpException

HttpClient: REQUEST https://api.openai.com/v1/chat/completions failed with exception: io.ktor.client.network.sockets.SocketTimeoutException: Socket timeout has expired [url=https://api.openai.com/v1/chat/completions, socket_timeout=unknown] ms

[pool-1-thread-1] TRACE i.k.c.plugins.HttpCallValidator - Processing exception io.ktor.client.network.sockets.SocketTimeoutException: Socket timeout has expired [url=https://api.openai.com/v1/chat/completions, socket_timeout=unknown] ms for request https://api.openai.com/v1/chat/completions

How to get the Usage info from the response ?

In the completion response, there is an object that lists the tokens used:

 "usage": {
        "prompt_tokens": 216,
        "completion_tokens": 795,
        "total_tokens": 1011
    }

But it is not found in the TextCompletion class.

Please add all the fields as per the latest response.

....

Awesome work. Thanks
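For reference, the usage object shown above maps naturally onto a small data class; field names here mirror the JSON keys (this is a sketch, not the library's actual type):

```kotlin
// Sketch of a usage model; field names follow the JSON keys
data class Usage(
    val promptTokens: Int,      // "prompt_tokens"
    val completionTokens: Int,  // "completion_tokens"
    val totalTokens: Int,       // "total_tokens"
)
```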

Using Kotlin DSLs for creating CompletionRequest, ImageCreationURL, ...

As I mentioned on Twitter, your design is close to what I would have done, except that I like DSL-style builders instead of constructors.

It does require a bit of boilerplate code, but I think the experience for the user is nicer.

Would you be open to something like that? Note that it is completely backwards compatible, i.e. you can still write the old constructor code if you want to.

import FooFactory.Companion.Foo

data class Foo(val bar: String, val baz: String)

class FooFactory {
    lateinit var bar: String
    lateinit var baz: String

    private fun build(init: FooFactory.() -> Unit): Foo {
        this.init()
        return Foo(bar, baz)
    }

    companion object {
        fun Foo(init: FooFactory.() -> Unit): Foo = FooFactory().build(init)
    }
}

fun main() {

    val foo1 = Foo {
        bar = "Bar"
        println("hi")
        baz = "Baz"
    }

    val foo2 = Foo(
        bar = "Bar",
        baz = "Baz",
    )

    println(foo1 == foo2)

}

// hi
// true

is it possible to get an api key?

Hi, I'm working on a project using this openai-kotlin client. Is it possible to send a request with a username and password and obtain the user's API key or token that way?

Then the user would enter a username and password to use my app, instead of finding the API key manually.

I can do this by sending a request with OkHttp, for example, but it would be more convenient to do it with the client.

When use Flow , the Usage is always null

This is my code:

        get("/chatGptWeb/shortMemory") {
            val chatCompletions = openAi.chatCompletions(
                ChatCompletionRequest(
                    ModelId("gpt-3.5-turbo"),
                    listOf(ChatMessage(ChatRole.User, "你好,请进行自我介绍"))
                )
            )
            call.respondTextWriter {
                writer {
                    chatCompletions.collect {
                        println(it)   //LOOK  HERE
                        val content = it.choices[0].delta?.content
                        if (content != null) {
                            write(content)
                            flush()
                        }
                    }
                }.join()
            }
        }

I print the response:

ChatCompletionChunk(id=chatcmpl-6ydTV8XlVBjHh0nt7s51DzDiCzCml, created=1679909441, model=ModelId(id=gpt-3.5-turbo-0301), choices=[ChatChunk(index=0, delta=ChatDelta(role=ChatRole(role=assistant), content=null, name=null), finishReason=null)], usage=null)
ChatCompletionChunk(id=chatcmpl-6ydTV8XlVBjHh0nt7s51DzDiCzCml, created=1679909441, model=ModelId(id=gpt-3.5-turbo-0301), choices=[ChatChunk(index=0, delta=ChatDelta(role=null, content=你, name=null), finishReason=null)], usage=null)
ChatCompletionChunk(id=chatcmpl-6ydTV8XlVBjHh0nt7s51DzDiCzCml, created=1679909441, model=ModelId(id=gpt-3.5-turbo-0301), choices=[ChatChunk(index=0, delta=ChatDelta(role=null, content=好, name=null), finishReason=null)], usage=null)
ChatCompletionChunk(id=chatcmpl-6ydTV8XlVBjHh0nt7s51DzDiCzCml, created=1679909441, model=ModelId(id=gpt-3.5-turbo-0301), choices=[ChatChunk(index=0, delta=ChatDelta(role=null, content=,, name=null), finishReason=null)], usage=null)
.
.
.
ChatCompletionChunk(id=chatcmpl-6ydTV8XlVBjHh0nt7s51DzDiCzCml, created=1679909441, model=ModelId(id=gpt-3.5-turbo-0301), choices=[ChatChunk(index=0, delta=ChatDelta(role=null, content=助, name=null), finishReason=null)], usage=null)
ChatCompletionChunk(id=chatcmpl-6ydTV8XlVBjHh0nt7s51DzDiCzCml, created=1679909441, model=ModelId(id=gpt-3.5-turbo-0301), choices=[ChatChunk(index=0, delta=ChatDelta(role=null, content=。, name=null), finishReason=null)], usage=null)
ChatCompletionChunk(id=chatcmpl-6ydTV8XlVBjHh0nt7s51DzDiCzCml, created=1679909441, model=ModelId(id=gpt-3.5-turbo-0301), choices=[ChatChunk(index=0, delta=ChatDelta(role=null, content=null, name=null), finishReason=stop)], usage=null)

I can get role, content, and finishReason, but the usage is always null!
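This matches how the streaming endpoint behaved at the time: each chunk carries only a delta and `usage` is reported as null on every chunk, so usage has to come from a non-streaming `chatCompletion` call (or client-side token counting). A sketch of aggregating the deltas, with illustrative stand-in types for `ChatCompletionChunk`/`ChatDelta`:

```kotlin
// Stand-ins for ChatCompletionChunk/ChatDelta: each streamed chunk carries
// only a piece of the content, and `usage` is null on every chunk
data class Chunk(val content: String?, val finishReason: String? = null)

// Concatenate the deltas into the full reply, skipping role-only/stop chunks
fun aggregate(chunks: List<Chunk>): String =
    chunks.mapNotNull { it.content }.joinToString("")
```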

Help me, error after publishing the app to the Google Play Store

Everything was okay until I published the app to the Google Play Store; now it is not working. It cannot send messages and it shows a toast saying "w4.d". This is the log:

HttpClient: RESPONSE https://api.openai.com/v1/models failed with exception: cg.k: Serializer for class 'a' is not found.
Mark the class as @Serializable or provide the serializer explicitly
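The obfuscated names ("w4.d", class 'a') suggest R8/ProGuard renamed the @Serializable model classes in the release build. Keep rules along these lines usually fix it (adapted from kotlinx.serialization's documented rules; the package pattern is an assumption):

```
-keepattributes *Annotation*, InnerClasses

# Keep the generated serializers for the client's @Serializable models
-keep,includedescriptorclasses class com.aallam.openai.**$$serializer { *; }
-keepclassmembers class com.aallam.openai.** { *** Companion; }
-keepclasseswithmembers class com.aallam.openai.** {
    kotlinx.serialization.KSerializer serializer(...);
}
```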
