betalgo / openai Goto Github PK
OpenAI .NET SDK - Azure OpenAI, ChatGPT, Whisper, and DALL-E
Home Page: https://betalgo.github.io/openai/
License: MIT License
Describe the bug
We are not able to specify the chat context
Additional context
Context support should be added so that the same chat session can be continued across requests.
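The chat endpoint itself is stateless, so until the library adds first-class support, context can be kept by resending the conversation history with each request. A minimal sketch using the request and message types that appear elsewhere in these issues (namespace and method names assumed from the library's examples):

```csharp
using OpenAI.GPT3.Interfaces;
using OpenAI.GPT3.ObjectModels;
using OpenAI.GPT3.ObjectModels.RequestModels;

// Keep the running conversation and resend it with every request.
var history = new List<ChatMessage>
{
    ChatMessage.FromSystem("You are a helpful assistant.")
};

async Task<string> AskAsync(IOpenAIService service, string userInput)
{
    history.Add(ChatMessage.FromUser(userInput));

    var result = await service.ChatCompletion.CreateCompletion(new ChatCompletionCreateRequest
    {
        Messages = history,
        Model = Models.ChatGpt3_5Turbo
    });

    var answer = result.Choices.First().Message.Content;
    history.Add(ChatMessage.FromAssistant(answer)); // keep the reply so the next turn has context
    return answer;
}
```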
Describe the bug
The wiki page at https://github.com/betalgo/openai/wiki/Chat-GPT shows FromAssistance(), but the actual method is FromAssistant().
Count this as a vote to add Whisper to the library.
Is your feature request related to a problem? Please describe.
I sometimes choose the wrong model for the type of request I make
Describe the solution you'd like
I would like to type "EditModel." and get only models that are valid for an edit endpoint, or "ImageModel." and get valid Image models, etc.
Describe alternatives you've considered
I see they're labeled, but it'd be nice if it weren't possible to supply an invalid model to my request object.
Additional context
I just started playing with the library today, it's very helpful in general. If this request is approved, I'd be happy to help implement it.
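One way this request could look, sketched with a hypothetical wrapper type (EditModel is not part of the library; the model name strings are the OpenAI edit models current at the time):

```csharp
// Hypothetical sketch: a per-endpoint model type, so an edit request
// could only accept models that are valid for the edit endpoint.
public readonly struct EditModel
{
    public string Name { get; }
    private EditModel(string name) => Name = name;

    public static readonly EditModel TextDavinciEdit = new("text-davinci-edit-001");
    public static readonly EditModel CodeDavinciEdit = new("code-davinci-edit-001");

    public override string ToString() => Name;
}
```

Typing "EditModel." would then surface only edit-capable models, and a request object taking an EditModel parameter could not be handed an image model by mistake.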
Currently, we have an issue with model (engine) usage. It is preventing the use of fine-tuned models.
I am planning to fix this next week (16-22 January). Please follow this issue for notifications.
Describe the bug
When I use the CreateCompletionAsStream() method, the response does not include the Usage object, but the CreateCompletion() response does.
Your code piece
var chatResponse = openAiService.ChatCompletion.CreateCompletionAsStream(new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage>
    {
        ChatMessage.FromSystem("You are a helpful assistant."),
        ChatMessage.FromUser(inputStr),
    },
    Model = Models.ChatGpt3_5Turbo
});
var tokens = 0;
await foreach (var chat in chatResponse)
{
    if (chat.Successful)
    {
        tokens += chat.Usage?.TotalTokens ?? 0;
        var choice = chat.Choices.First();
        Console.Write(choice.Message.Content);
    }
}
Result
chat.Usage is null
Expected behavior
chat.Usage is not null and includes the token counts.
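Until streamed responses carry usage data, a workaround is to estimate the completion tokens client-side with the library's own tokenizer (TokenizerGpt3 appears elsewhere in these issues; the namespace is assumed, and the local count may differ slightly from the server's accounting):

```csharp
using System.Text;
using OpenAI.GPT3.Tokenizer.GPT3;

// Accumulate the streamed content, then count tokens locally.
var sb = new StringBuilder();
await foreach (var chunk in chatResponse)
{
    if (chunk.Successful)
    {
        sb.Append(chunk.Choices.First().Message.Content);
    }
}
var estimatedCompletionTokens = TokenizerGpt3.Encode(sb.ToString()).Count;
```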
Screenshots
The CreateCompletionAsStream() response
The CreateCompletion() response
Desktop (please complete the following information):
Describe the bug
Sending a CreateTranscription request with a faulty API key still reports success.
Your code piece
var completionResult = await _openAIService.Audio.CreateTranscription(new AudioCreateTranscriptionRequest
{
    Model = Models.WhisperV1,
    ResponseFormat = StaticValues.AudioStatics.ResponseFormat.Text,
    File = fileBytes,
    FileName = inmp3.Name,
});
Result
completionResult.Successful = true
completionResult.Text =
{
    "error": {
        "message": "Incorrect API key provided: sk-QoVWP***************************************m111. You can find your API key at https://platform.openai.com/account/api-keys.",
        "type": "invalid_request_error",
        "param": null,
        "code": "invalid_api_key"
    }
}
Expected behavior
completionResult.Successful should be false
Desktop (please complete the following information):
If I use Models.Gpt_4 as the completion model parameter, OpenAI rejects my request.
I have to pass the string "gpt-4" instead.
There is a similar issue with Gpt_4_32k.
The copilot proxy exposed by the Github Copilot CLI is https://copilot-proxy.githubusercontent.com/
The completion URI follows the same conventions, so I imagine it will work fine here:
https://copilot-proxy.githubusercontent.com/v1/engines/copilot-labs-codex/completions
I'm pretty sure it just proxies an Azure ML model based on the headers, so I propose adding a new provider type that is either:
Describe the bug
The Tokenizer test fails on Windows because it receives a different number of tokens than expected (68 instead of 64, as on https://platform.openai.com/tokenizer). The issue is that when TokenizerSample.txt is read, newlines come in as \r\n, producing 2 tokens each instead of 1.
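A sketch of a fix: normalize line endings before tokenizing so the count matches across platforms (file and namespace taken from the test described above; verify against your version):

```csharp
using System.IO;
using OpenAI.GPT3.Tokenizer.GPT3;

// Collapse Windows \r\n endings to \n so each newline encodes as a
// single token, matching the online tokenizer.
var text = File.ReadAllText("TokenizerSample.txt").Replace("\r\n", "\n");
var tokenCount = TokenizerGpt3.Encode(text).Count;
```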
Desktop
The update information in readme.md is misspelled:
"CahtGPT-4 support" => "ChatGPT-4 support".
Hi,
And thanks for sharing your work. Does this support fine-tuning as well?
I got the following error when I tried to call the await EngineTestHelper.FetchEnginesTest(sdk); method.
I have tested some other methods and they seem to work, but this one does not. I do not know what the issue is.
When calling create completion, I get a timeout about 50% of the time.
There is also no obvious way to change this, so every time I am just sitting here waiting for 100 seconds.
Would you please help check the following error? Thanks.
Describe the bug
Failed to run RunSimpleCompletionTest()
Got an exception: "| InnerException | {"Authentication failed because the remote party sent a TLS alert: 'HandshakeFailure'."} | System.Exception {System.Security.Authentication.AuthenticationException}"
Your code piece
using ChatGPTClient;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using OpenAI.GPT3.Extensions;
using OpenAI.GPT3.Interfaces;
var builder = new ConfigurationBuilder()
.AddJsonFile("ApiSettings.json").AddUserSecrets<Program>();
IConfiguration configuration = builder.Build();
var serviceCollection = new ServiceCollection();
serviceCollection.AddScoped(_ => configuration);
serviceCollection.AddOpenAIService();
var serviceProvider = serviceCollection.BuildServiceProvider();
var sdk = serviceProvider.GetRequiredService<IOpenAIService>();
await CompletionTestHelper.RunSimpleCompletionTest(sdk);
Console.ReadLine();
Desktop (please complete the following information):
Hi, this is the code piece:
public async Task<string> GetAnswer(string text)
{
    string answer = "";
    OpenAIService service = new OpenAIService(new OpenAiOptions() { ApiKey = OpenAIToken });
    CompletionCreateRequest createRequest = new CompletionCreateRequest()
    {
        Prompt = text,
        Temperature = 0.3f,
        MaxTokens = 1000
    };
    var res = await service.Completions.CreateCompletion(createRequest, Models.TextDavinciV3);
    if (res.Successful)
    {
        answer = res.Choices.FirstOrDefault().Text;
    }
    return answer;
}
When I run it on my server, it returns an error: Authentication failed because the remote party sent a TLS alert: 'HandshakeFailure'.
When I run it on my local PC, it runs fine.
I have checked the TLS settings on the server and the local PC, and they are the same.
By the way, I can get a normal response from openai.com's playground on the server.
Thank you.
Please add the best_of parameter to the ChatCompletionCreateRequest.
More info: https://platform.openai.com/docs/api-reference/completions/create#completions/create-best_of
best_of (integer, optional, defaults to 1)
Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.
When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n.
Many thanks for the project. I'm trying to use it with the Azure OpenAI service, but I do not know how to find the DeploymentId. Could you please help? Thanks.
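For reference, the DeploymentId is the deployment name you chose when deploying a model in the Azure portal or Azure OpenAI Studio, not the model name itself. A configuration sketch, assuming the library's Azure options look like this (property names taken from OpenAiOptions as I understand them; verify against your version):

```csharp
using OpenAI.GPT3;
using OpenAI.GPT3.Managers;

var openAiService = new OpenAIService(new OpenAiOptions
{
    ProviderType = ProviderType.Azure,
    ApiKey = "your-azure-openai-key",
    ResourceName = "your-resource-name",   // the Azure resource, e.g. "my-openai"
    DeploymentId = "your-deployment-name"  // the name chosen when deploying the model
});
```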
These helper methods are great, but the name FromAssistance doesn't fully make sense. It should be FromAssistant, IMO.
Thanks for the library!
Is your feature request related to a problem? Please describe.
When I use the tokenizer to count Chinese characters, the result is not correct.
Describe the solution you'd like
Correct token counts for Chinese characters.
Describe alternatives you've considered
N/A
Additional context
N/A
Describe the bug
The HTTP client is not exposed and cannot be configured, so a call to audio transcription with Whisper is limited to small files even though the service supports up to 1 hour of audio: the call is interrupted after the default 100 seconds, which is not enough. Small files are processed correctly, though.
Your code piece
var audioCreateTranscriptionResponse = await _iOpenAIService.Audio.CreateTranscription(audioCreateTranscriptionRequest);
Result
Task is cancelled
Expected behavior
To be able to configure the client timeout.
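A possible workaround sketch, assuming the service can be registered through IHttpClientFactory (AddHttpClient is standard Microsoft.Extensions.Http; whether OpenAIService exposes a constructor accepting an HttpClient depends on the library version):

```csharp
using Microsoft.Extensions.DependencyInjection;
using OpenAI.GPT3.Interfaces;
using OpenAI.GPT3.Managers;

// Register the service with a typed HttpClient and raise the default
// 100-second timeout so long Whisper transcriptions can finish.
var serviceCollection = new ServiceCollection();
serviceCollection.AddHttpClient<IOpenAIService, OpenAIService>(client =>
{
    client.Timeout = TimeSpan.FromMinutes(15);
});
```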
When I include logprobs in a request I get an error:
var builder = new ConfigurationBuilder()
    .AddJsonFile("ApiSettings.json")
    .AddUserSecrets<Program>();
IConfiguration configuration = builder.Build();
var serviceCollection = new ServiceCollection();
serviceCollection.AddScoped(_ => configuration);
serviceCollection.AddOpenAIService();
var serviceProvider = serviceCollection.BuildServiceProvider();
var sdk = serviceProvider.GetRequiredService<IOpenAIService>();
var openAiService = serviceProvider.GetRequiredService<IOpenAIService>();
var completionResult = await openAiService.Completions.CreateCompletion(new CompletionCreateRequest()
{
    Prompt = "I never go to bed after ten o'clock.",
    LogProbs = 3,
    MaxTokens = 50
},
Models.Davinci);
ERROR MESSAGE:
{"The JSON value could not be converted to System.Nullable`1[System.Int32]. Path: $.choices[0].logprobs | LineNumber: 0 | BytePositionInLine: 344."}
Hello, I would like to request an addition to TokenizerGpt3.Encode that would just return the number of tokens, without creating unnecessary lists. In addition, there are a couple of further optimizations:
- Encode should return IEnumerable<int> and use yield. There is no need to create a new list to return a sequence.
- In BytePairEncoding(token).Split(' ').Select(x => TokenizerGpt3Settings.Encoder[x]).ToList(); there is also no need for .ToList(), since AddRange works just as well without it.

I was wondering whether this has support for Codex.
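A sketch of the yield-based Encode suggested above (Tokenize, BytePairEncoding, and TokenizerGpt3Settings.Encoder stand in for the library's internal members):

```csharp
// Stream token ids lazily instead of materializing intermediate lists.
public static IEnumerable<int> Encode(string text)
{
    foreach (var token in Tokenize(text))
    {
        foreach (var part in BytePairEncoding(token).Split(' '))
        {
            yield return TokenizerGpt3Settings.Encoder[part];
        }
    }
}

// Counting then allocates no list:
// int count = Encode(text).Count();
```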
Hi 👋
Is your feature request related to a problem? Please describe.
I would like to be able to pass CancellationToken to IOpenAIService
endpoints.
Describe the solution you'd like
For example, when calling the completions endpoint, it would be great to pass the token like this:
var result = await _service.Completions.CreateCompletion(new CompletionCreateRequest
{
..
},
cancellationToken: cancellationToken);
Describe alternatives you've considered
x
Additional context
x
Maybe I'm bringing this up after someone has already thought about it and you have opinions already? Happy to prepare a PR for this.
Thanks!
When I try the code below:
var imageResult = await openAiService.Image.CreateImage(new ImageCreateRequest
{
    Prompt = "Laser cat eyes",
    N = 2,
    Size = StaticValues.ImageStatics.Size.Size256,
    ResponseFormat = StaticValues.ImageStatics.ResponseFormat.Url,
    User = "TEST"
});
I get an error for the image response like:
unhandled error: '<' is an invalid start of a value. Path: $ | LineNumber: 0 | BytePositionInLine: 0.
It should be able to return an image result.
Could you please help solve it?
Severity Code Description Project File Line Suppression State
Error Could not install package 'Betalgo.OpenAI.GPT3 6.7.2'. You are trying to install this package into a project that targets '.NETFramework,Version=v4.7.2', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.
Describe the bug
Dependency injection is not working for Azure Functions with the overload serviceCollection.AddOpenAIService();
It only works with the overload serviceCollection.AddOpenAIService(settings => { settings.ApiKey = "APIKEY" });
It seems that the constructor of OpenAIService is not called in the expected order, so the OpenAiOptions parameter has an empty key.
Your code piece
https://github.com/elias-rod/ChatGptPoc/tree/di-error
(branch di-error)
My secrets.json is
{
    "OpenAiOptions": {
        "ApiKey": "xxx"
    }
}
Error repro steps
Desktop (please complete the following information):
It would be nice to support the Azure OpenAI service. I believe it's the same API, just with different endpoints: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/quickstart?pivots=rest-api
I really like the repository. I just wanted to ask: how can we contribute?
The documentation for the Moderation API has the test backward. It should say "!= false" instead of "!= true", because "Flagged" is set to false if the content is OK, and true if the content is unacceptable.
I tested this with an actual call to the service, just to make sure I understood how it works.
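With the corrected logic, content is acceptable only when Flagged is false. A minimal sketch (request and response shapes assumed from the library's moderation models; verify against your version):

```csharp
using OpenAI.GPT3.ObjectModels.RequestModels;

var moderation = await openAiService.Moderation.CreateModeration(new CreateModerationRequest
{
    Input = "some user-provided text"
});

// Flagged == true means the content is unacceptable.
if (moderation.Results.Any(r => r.Flagged))
{
    Console.WriteLine("Content was flagged.");
}
```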
Will you be adding support for gpt-3.5-turbo?
Thanks!
Is your feature request related to a problem? Please describe.
I have encountered a very important issue. My environment does not have direct access to api.openai.com, and I need an agent fu
Describe the solution you'd like
Support configuring proxy servers in the options so that API requests can be routed through a proxy:
builder.Services.AddOpenAIService(settings => {
settings.ProxyUrl = "proxy serve url";
settings.ProxyPort = "proxy serve port";
});
Getting this error when installing
Install-Package Betalgo.OpenAI.GPT3
The package(s) come(s) from a package source that is not marked as trusted.
Are you sure you want to install software from 'nuget.org'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): A
Install-Package: Dependency loop detected for package 'Betalgo.OpenAI.GPT3'.
Result
It runs really slow, and then fails.
Expected behavior
Install the package from the CLI
Desktop (please complete the following information):
Irrelevant Remarks
Great library by the way, and as someone who does not like to deal with REST APIs, having it exposed as native methods makes my code cleaner.
Is your feature request related to a problem? Please describe.
According to the official API doc and Engadget news, ChatGPT's model is officially available as of today (2023-03-01). It would be nice if this library supported the corresponding model names for the completion API.
Describe the solution you'd like
I think there should be minimal API change - essentially adding an enumeration. This feature request only concerns the completion API.
Describe alternatives you've considered
I tried to make a fork, but the ModelNameBuilder programming model is a bit confusing to me, so I will leave this feature implementation to the original author 😂
Additional context
Describe the bug
I'm trying to build chat moderation features with OpenAI, and I found a small typo: the property is Categories.Sexualminors, but CategoryScores, which maps the same JSON name 'sexualminors', uses CategoryScores.SexualMinors.
Your code piece
public static IEnumerable<float> GetScores(this CategoryScores scores)
{
    yield return scores.Hate;
    yield return scores.HateThreatening;
    yield return scores.Selfharm;
    yield return scores.Sexual;
    yield return scores.SexualMinors;
    yield return scores.Violence;
    yield return scores.Violencegraphic;
}

public static IEnumerable<bool> GetCategories(this Categories category)
{
    yield return category.Hate;
    yield return category.HateThreatening;
    yield return category.Selfharm;
    yield return category.Sexual;
    yield return category.Sexualminors;
    yield return category.Violence;
    yield return category.Violencegraphic;
}
Result
Expected behavior
It should be consistent, either Sexualminors or SexualMinors; I prefer SexualMinors.
Desktop (please complete the following information):
Additional context
Describe the bug
Exceptions are thrown when a transient service that uses OpenAIService for a group of calls goes unused and its thread naturally closes.
Your code piece
private readonly IOpenAIService? _openAIService;
private readonly ILocalSettings settingsService;
private readonly ILoggingService Logger;
private string _apikey;
private bool _disposed;

public OpenAIAPIService(ILocalSettings settingsService, ILoggingService logger)
{
    this.settingsService = settingsService;
    this.Logger = logger;
    _apikey = settingsService.Load<string>("ApiKey");
    if (String.IsNullOrEmpty(_apikey))
    {
        _apikey = "Api Key Is Null or Empty";
        Logger.LogError(_apikey);
    }
    _openAIService = new OpenAIService(new OpenAiOptions()
    {
        ApiKey = _apikey
    });
}
Result
A series of socket exceptions occurs.
(The only socket code in this app is OpenAIService.)
Expected behavior
OpenAIService needs to gracefully close or be IDisposable
Desktop (please complete the following information):
Additional context
Describe the bug
Today, we’re announcing that we will be deprecating the Answers, Classifications, and Search endpoints.
Since releasing these endpoints, we’ve developed new methods that achieve better results for these tasks. As a result, we’ll be removing the Answers, Classifications, and Search endpoints from our documentation and removing access to these endpoints on December 3, 2022.
We encourage anyone using these endpoints to switch over to newer techniques which produce better results, and have developed transition guides to help with this process.
Your code piece
Exception message:
Org org-OqrhlCeqzLz6NBDkPuE1o5IY does not have access to the answers endpoint, likely because it is deprecated. Please see https://community.openai.com/t/answers-classification-search-endpoint-deprecation/18532 for more information and reach out to [email protected] if you have any questions.
Result
If you are currently using these endpoints, this change will not immediately impact you. Prior to December 3, you will still be able to make requests and access the endpoint documentation by directly navigating to these pages:
Additional context
Link to the official message.
hi there:
I found that there is no correlation between successive prompts submitted through the API interface; I can't chat with it continuously. This is different from the behavior of the ChatGPT web system.
Is the method I am using incorrect?
Thank you!
Describe the bug
When calling the CreateImageEdit method and passing an ImageEditCreateRequest object, if the optional parameter "Mask" is not set, it throws an exception:
Value cannot be null. (Parameter 'content')|System.ArgumentNullException: Value cannot be null. (Parameter 'content') at System.Net.Http.ByteArrayContent..ctor(Byte[] content) at OpenAI.GPT3.Managers.OpenAIService.CreateImageEdit(ImageEditCreateRequest imageEditCreateRequest)
Your code piece
var imageCreateResponse = await _openAIService.Image.CreateImageEdit(new ImageEditCreateRequest()
{
    Size = "512x512",
    N = 1,
    ImageName = "123456",
    Image = fileBytes,
    User = Cryptography.SHA1Hash("123456789", true),
    Prompt = "Robot, AI"
});
From the SDK source code, there is no null check on the Mask property before it is added to the request:
multipartContent.Add(new ByteArrayContent(imageEditCreateRequest.Mask), "mask", imageEditCreateRequest.MaskName);
public async Task<ImageCreateResponse> CreateImageEdit(ImageEditCreateRequest imageEditCreateRequest)
{
    var multipartContent = new MultipartFormDataContent();
    if (imageEditCreateRequest.User != null) multipartContent.Add(new StringContent(imageEditCreateRequest.User), "user");
    if (imageEditCreateRequest.ResponseFormat != null) multipartContent.Add(new StringContent(imageEditCreateRequest.ResponseFormat), "response_format");
    if (imageEditCreateRequest.Size != null) multipartContent.Add(new StringContent(imageEditCreateRequest.Size), "size");
    if (imageEditCreateRequest.N != null) multipartContent.Add(new StringContent(imageEditCreateRequest.N.ToString()!), "n");
    multipartContent.Add(new StringContent(imageEditCreateRequest.Prompt), "prompt");
    multipartContent.Add(new ByteArrayContent(imageEditCreateRequest.Image), "image", imageEditCreateRequest.ImageName);
    multipartContent.Add(new ByteArrayContent(imageEditCreateRequest.Mask), "mask", imageEditCreateRequest.MaskName);
    return await _httpClient.PostFileAndReadAsAsync<ImageCreateResponse>(_endpointProvider.ImageEditCreate(), multipartContent);
}
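A sketch of a fix: guard the optional mask the same way the other optional fields in that method are guarded.

```csharp
// Only attach the mask part when the optional Mask property is set,
// avoiding the ArgumentNullException from ByteArrayContent(null).
if (imageEditCreateRequest.Mask != null)
{
    multipartContent.Add(new ByteArrayContent(imageEditCreateRequest.Mask), "mask", imageEditCreateRequest.MaskName);
}
```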
Hi,
I'm looking for the best way to use GPT for text translation. Can you please advise me how, or which endpoint to use for that? I would also like to pass instructions for context and a glossary.
Thank you for your help.
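One possible approach, sketched with the chat completion endpoint used elsewhere in these issues: carry the translation instructions, context, and glossary in the system message (glossary-in-prompt is a prompting technique, not a dedicated API feature; the language pair and glossary entry here are made up for illustration):

```csharp
using OpenAI.GPT3.ObjectModels;
using OpenAI.GPT3.ObjectModels.RequestModels;

var result = await openAiService.ChatCompletion.CreateCompletion(new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage>
    {
        ChatMessage.FromSystem(
            "You are a translator. Translate the user's text from English to German. " +
            "Context: a software manual. Glossary: translate 'thread pool' as 'Threadpool'."),
        ChatMessage.FromUser("The thread pool schedules work items.")
    },
    Model = Models.ChatGpt3_5Turbo
});

if (result.Successful)
{
    Console.WriteLine(result.Choices.First().Message.Content);
}
```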
All OpenAI APIs potentially impose rate limiting. Ideally, any library designed to abstract the APIs should support exponential backoff.
https://beta.openai.com/docs/guides/production-best-practices/managing-rate-limits-and-latency
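A generic backoff sketch that a caller (or the library) could wrap around any request; the rate-limit predicate is an assumption and would in practice inspect the response's error code or HTTP status:

```csharp
// Retry with exponential backoff plus jitter: ~1s, 2s, 4s, ... between attempts.
async Task<T> WithBackoffAsync<T>(Func<Task<T>> call, Func<T, bool> isRateLimited, int maxRetries = 5)
{
    var rng = new Random();
    for (var attempt = 0; ; attempt++)
    {
        var result = await call();
        if (!isRateLimited(result) || attempt >= maxRetries)
            return result;

        var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt) + rng.NextDouble());
        await Task.Delay(delay);
    }
}
```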
Could you please put image generation example code for DALL-E on the home page?
Describe the bug
TokenizerGpt3.Encode incorrect
Your code piece
text = """
Many words map to one token, but some don't: indivisible.
Unicode characters like emojis may be split into many tokens containing the underlying bytes: 🤚🏾
Sequences of characters commonly found next to each other may be grouped together: 1234567890
""";
int n = TokenizerGpt3.Encode(text).Count;
Result
n = 260
Desktop (please complete the following information):
Describe the bug
The SSL connection could not be established, see inner exception.
SocketException: An existing connection was forcibly closed by the remote host.
Your code piece
public async Task GenerateResponse2()
{
    var openAiService = new OpenAIService(new OpenAiOptions()
    {
        ApiKey = apiKey
    });
    var completionResult = await openAiService.ChatCompletion.CreateCompletion(new ChatCompletionCreateRequest
    {
        Messages = new List<ChatMessage>
        {
            ChatMessage.FromSystem("You are a helpful assistant."),
            ChatMessage.FromUser("Who won the world series in 2020?"),
            ChatMessage.FromAssistant("The Los Angeles Dodgers won the World Series in 2020."),
            ChatMessage.FromUser("Where was it played?")
        },
        Model = Models.ChatGpt3_5Turbo,
        MaxTokens = 50 // optional
    });
    if (completionResult.Successful)
    {
        Console.WriteLine(completionResult.Choices.First().Message.Content);
    }
}
Describe the bug
I started getting a JsonException when calling CreateCompletion as of this morning, with no code changes on my end. I wonder if OpenAI changed something?
Your code piece
try {
    await GPTSemaphore.WaitAsync();
    CompletionCreateResponse result = await api.Completions.CreateCompletion(completionRequest, "text-davinci-002");
    if (result.Successful) {
        var resultStrings = result.Choices.Select(c => c.Text).ToList();
        foreach (var resultString in resultStrings) {
            prompt.responses.Add(Util.CleanListString(resultString));
            if (RuntimeConfig.settings.displayOptions.showResults) {
                Util.WriteLineToConsole("---- RESULT ----", ConsoleColor.DarkCyan);
                Util.WriteLineToConsole(resultString, ConsoleColor.Cyan);
            }
        }
        GPTRequestsTotal++;
        return resultStrings.Count;
    } else {
        // TODO: Handle failures in a smarter way
        if (result.Error is not null) {
            Console.WriteLine($"{result.Error.Code}: OpenAI = {result.Error.Message}");
        }
    }
    throw new Exception("API Failure");
}
Result
Exception has occurred: CLR/System.Text.Json.JsonException
An unhandled exception of type 'System.Text.Json.JsonException' occurred in System.Private.CoreLib.dll: ''<' is an invalid start of a value. Path: $ | LineNumber: 0 | BytePositionInLine: 0.'
Inner exceptions found, see $exception in variables window for more details.
Innermost exception System.Text.Json.JsonReaderException : '<' is an invalid start of a value. LineNumber: 0 | BytePositionInLine: 0.
at System.Text.Json.ThrowHelper.ThrowJsonReaderException(Utf8JsonReader& json, ExceptionResource resource, Byte nextByte, ReadOnlySpan`1 bytes) at System.Text.Json.Utf8JsonReader.ConsumeValue(Byte marker) at System.Text.Json.Utf8JsonReader.ReadFirstToken(Byte first) at System.Text.Json.Utf8JsonReader.ReadSingleSegment() at System.Text.Json.Utf8JsonReader.Read() at System.Text.Json.Serialization.JsonConverter`1.ReadCore(Utf8JsonReader& reader, JsonSerializerOptions options, ReadStack& state)
Expected behavior
Get a result back
Desktop (please complete the following information):
Windows, C#, 6.5.0 (and 6.4.0)