Azure Functions bindings for OpenAI's GPT engine

License: MIT

This project adds support for OpenAI LLM (GPT-3.5-turbo, GPT-4) bindings in Azure Functions.

This extension depends on the Azure AI OpenAI SDK.

NuGet Packages

Several NuGet packages are available as part of this project.

Requirements

  • .NET 6 SDK or greater (Visual Studio 2022 recommended)
  • Azure Functions Core Tools v4.x
  • Update the app settings in your Azure Function app, or in the local.settings.json file for local development, with the following keys (see the example local.settings.json after this list):
    1. AZURE_OPENAI_ENDPOINT - the endpoint of your Azure OpenAI resource (e.g. https://***.openai.azure.com/). For authentication, use one of the following two options:
      • System Managed Identity - assign the user or function app the Cognitive Services OpenAI User role on the Azure OpenAI resource, OR
      • AZURE_OPENAI_KEY - a key of the Azure OpenAI resource saved as a setting.
    2. OR OPENAI_API_KEY - non-Azure option - an API key for an OpenAI account saved as a setting.
      If using environment variables instead, see the .env readme to learn more.
  • Azure Storage emulator such as Azurite running in the background
  • The target language runtime (e.g. .NET, Node.js, PowerShell, Python etc.) installed on your machine
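
For local development, these keys typically go in the Values section of local.settings.json at the root of the function app. Below is a minimal sketch assuming the Azure OpenAI key-based option; the endpoint and key values are placeholders, AzureWebJobsStorage points at the Azurite emulator mentioned above, and FUNCTIONS_WORKER_RUNTIME should match your target language runtime (dotnet is shown here). For non-Azure OpenAI, replace the two AZURE_OPENAI_* entries with an OPENAI_API_KEY entry.

{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "FUNCTIONS_WORKER_RUNTIME": "dotnet",
        "AZURE_OPENAI_ENDPOINT": "https://<your-resource-name>.openai.azure.com/",
        "AZURE_OPENAI_KEY": "<your-azure-openai-key>"
    }
}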

Features

The following features are currently available. More features will be added over time.

Text completion input binding

The textCompletion input binding can be used to invoke the OpenAI Chat Completions API and return the results to the function.

The examples below define "who is" HTTP-triggered functions with a hardcoded "who is {name}?" prompt, where {name} is substituted with the value from the HTTP request path. The OpenAI input binding invokes the OpenAI GPT endpoint to get the answer to the prompt, and the function returns the result text as the response content.

Setting a model is optional for non-Azure OpenAI; see the Default OpenAI models section below for the default model values.

C# example:

[FunctionName(nameof(WhoIs))]
public static string WhoIs(
    [HttpTrigger(AuthorizationLevel.Function, Route = "whois/{name}")] HttpRequest req,
    [TextCompletion("Who is {name}?", Model = "gpt-35-turbo")] TextCompletionResponse response)
{
    return response.Content;
}

TypeScript example:

import { app, input } from "@azure/functions";

// This OpenAI completion input requires a {name} binding value.
const openAICompletionInput = input.generic({
    prompt: 'Who is {name}?',
    maxTokens: '100',
    type: 'textCompletion',
    model: 'gpt-35-turbo'
})

app.http('whois', {
    methods: ['GET'],
    route: 'whois/{name}',
    authLevel: 'function',
    extraInputs: [openAICompletionInput],
    handler: async (_request, context) => {
        var response: any = context.extraInputs.get(openAICompletionInput)
        return { body: response.content.trim() }
    }
});

PowerShell example:

using namespace System.Net

param($Request, $TriggerMetadata, $TextCompletionResponse)

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
        StatusCode = [HttpStatusCode]::OK
        Body       = $TextCompletionResponse.Content
    })

In the same directory as the PowerShell function, define the bindings in a function.json file.

If using Azure OpenAI, set the model property in function.json to the name of your deployment; for non-Azure OpenAI, you can use it to override the default model.

{
    "type": "textCompletion",
    "direction": "in",
    "name": "TextCompletionResponse",
    "prompt": "Who is {name}?",
    "maxTokens": "100",
    "model": "gpt-3.5-turbo"
}

Setting a model is optional for non-Azure OpenAI; see the Default OpenAI models section below for the default model values.

Python example:

import json

import azure.functions as func

app = func.FunctionApp()

@app.route(route="whois/{name}", methods=["GET"])
@app.generic_input_binding(arg_name="response", type="textCompletion", data_type=func.DataType.STRING, prompt="Who is {name}?", maxTokens="100", model="gpt-3.5-turbo")
def whois(req: func.HttpRequest, response: str) -> func.HttpResponse:
    response_json = json.loads(response)
    return func.HttpResponse(response_json["content"], status_code=200)

Running locally

You can run the above function locally using the Azure Functions Core Tools and sending an HTTP request, similar to the following:

GET http://localhost:7127/api/whois/pikachu

The result that comes back will include the response from the GPT language model:

HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Tue, 28 Mar 2023 18:25:40 GMT
Server: Kestrel
Transfer-Encoding: chunked

Pikachu is a fictional creature from the Pokemon franchise. It is a yellow
mouse-like creature with powerful electrical abilities and a mischievous
personality. Pikachu is one of the most iconic and recognizable characters
from the franchise, and is featured in numerous video games, anime series,
movies, and other media.

You can find more instructions for running the samples in the corresponding project directories. The goal is to have samples for all languages supported by Azure Functions.

Chat bots

Chat completions are useful for building AI-powered chat bots.

There are three bindings you can use to interact with the chat bot:

  1. The chatBotCreate output binding creates a new chat bot with a specified system prompt.
  2. The chatBotPost output binding sends a message to the chat bot and saves the response in its internal state.
  3. The chatBotQuery input binding fetches the chat bot history and passes it to the function.

You can find samples in multiple languages with instructions in the chat samples directory.
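
As a rough illustration of how the three bindings could be combined, here is a hypothetical C# sketch. The ChatBotCreate, ChatBotPost, and ChatBotQuery attribute names and the ChatBotCreateRequest, ChatBotPostRequest, and ChatBotState types are assumptions inferred from the binding names above and may not match the extension exactly; the chat samples directory contains the authoritative code.

public static class ChatBotApis
{
    // Hypothetical sketch only: the attribute and request/state type names below are assumptions
    // inferred from the chatBotCreate/chatBotPost/chatBotQuery binding names described above.

    // Creates a new chat bot with a system prompt (chatBotCreate output binding).
    [FunctionName("CreateChatBot")]
    public static async Task<IActionResult> CreateChatBot(
        [HttpTrigger(AuthorizationLevel.Function, "put", Route = "chats/{chatId}")] HttpRequest req,
        string chatId,
        [ChatBotCreate] IAsyncCollector<ChatBotCreateRequest> createRequests)
    {
        await createRequests.AddAsync(new ChatBotCreateRequest(chatId, "You are a helpful chat bot."));
        return new AcceptedResult();
    }

    // Sends a user message to the chat bot; the model's reply is saved in the bot's internal state (chatBotPost output binding).
    [FunctionName("PostUserMessage")]
    public static async Task<IActionResult> PostUserMessage(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "chats/{chatId}")] HttpRequest req,
        string chatId,
        [ChatBotPost("{chatId}")] IAsyncCollector<ChatBotPostRequest> postRequests)
    {
        string userMessage = await new StreamReader(req.Body).ReadToEndAsync();
        await postRequests.AddAsync(new ChatBotPostRequest { Id = chatId, UserMessage = userMessage });
        return new AcceptedResult();
    }

    // Fetches the chat bot's message history (chatBotQuery input binding).
    [FunctionName("GetChatState")]
    public static IActionResult GetChatState(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "chats/{chatId}")] HttpRequest req,
        [ChatBotQuery("{chatId}")] ChatBotState state)
    {
        return new OkObjectResult(state);
    }
}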

Assistants

Assistants build on top of the chat bot functionality to provide chat bots with custom skills defined as functions. This internally uses the function calling feature of OpenAI's GPT models to select which functions to invoke and when.

You can define functions that can be triggered by chat bots by using the assistantSkillTrigger trigger binding. These functions are invoked by the extension when a chat bot signals that it would like to invoke a function in response to a user prompt.

The name of the function, the description provided by the trigger, and the parameter name are all hints that the underlying language model uses to determine when and how to invoke an assistant function.

public class AssistantSkills
{
    readonly ITodoManager todoManager;

    // This constructor is called by the Azure Functions runtime's dependency injection container.
    public AssistantSkills(ITodoManager todoManager)
    {
        this.todoManager = todoManager ?? throw new ArgumentNullException(nameof(todoManager));
    }

    // Called by the assistant to create new todo tasks.
    [FunctionName(nameof(AddTodo))]
    public Task AddTodo([AssistantSkillTrigger("Create a new todo task")] string taskDescription)
    {
        string todoId = Guid.NewGuid().ToString()[..6];
        return this.todoManager.AddTodoAsync(new TodoItem(todoId, taskDescription));
    }

    // Called by the assistant to fetch the list of previously created todo tasks.
    [FunctionName(nameof(GetTodos))]
    public Task<IReadOnlyList<TodoItem>> GetTodos(
        [AssistantSkillTrigger("Fetch the list of previously created todo tasks")] object inputIgnored)
    {
        return this.todoManager.GetTodosAsync();
    }
}

You can find samples with instructions in the assistant samples directory.

Embeddings Generator

OpenAI's text embeddings measure the relatedness of text strings. Embeddings are commonly used for:

  • Search (where results are ranked by relevance to a query string)
  • Clustering (where text strings are grouped by similarity)
  • Recommendations (where items with related text strings are recommended)
  • Anomaly detection (where outliers with little relatedness are identified)
  • Diversity measurement (where similarity distributions are analyzed)
  • Classification (where text strings are classified by their most similar label)

Processing of the source text files typically involves chunking the text into smaller pieces, such as sentences or paragraphs, and then making an OpenAI call to produce embeddings for each chunk independently. Finally, the embeddings need to be stored in a database or other data store for later use.

// Request payload containing the raw text to generate embeddings for.
public record EmbeddingsRequest(string RawText);

[FunctionName(nameof(GenerateEmbeddings_Http_Request))]
public static void GenerateEmbeddings_Http_Request(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "embeddings")] EmbeddingsRequest req,
    [Embeddings("{RawText}", InputType.RawText)] EmbeddingCreateResponse embeddingsResponse,
    ILogger logger)
{
    logger.LogInformation(
        "Received {count} embedding(s) for input text containing {length} characters.",
        embeddingsResponse.Data.Count,
        req.RawText.Length);

    // TODO: Store the embeddings into a database or other storage.
}

Semantic Search

The semantic search feature allows you to import documents into a vector database using an output binding and query the documents in that database using an input binding. For example, you can have a function that imports documents into a vector database and another function that issues queries to OpenAI using content stored in the vector database as context (also known as the Retrieval Augmented Generation, or RAG technique).

The supported list of vector databases is extensible, and more can be added by authoring a specially crafted NuGet package. Azure Data Explorer (a.k.a. Kusto) is used in the samples below; more vector databases may be added over time.

This HTTP trigger function takes a path to a local file as input, generates embeddings for the file, and stores the result into Azure Data Explorer (a.k.a. Kusto).

public record EmbeddingsRequest(string FilePath);

[FunctionName("IngestEmail")]
public static async Task<IActionResult> IngestEmail(
    [HttpTrigger(AuthorizationLevel.Function, "post")] EmbeddingsRequest req,
    [Embeddings("{FilePath}", InputType.FilePath, Model = "text-embedding-ada-002")] EmbeddingsContext embeddings,
    [SemanticSearch("KustoConnectionString", "Documents", ChatModel = "gpt-3.5-turbo", EmbeddingsModel = "text-embedding-ada-002")] IAsyncCollector<SearchableDocument> output)
{
    string title = Path.GetFileNameWithoutExtension(req.FilePath);
    await output.AddAsync(new SearchableDocument(title, embeddings));
    return new OkObjectResult(new { status = "success", title, chunks = embeddings.Count });
}

This HTTP trigger function takes a query prompt as input, pulls in semantically similar document chunks into a prompt, and then sends the combined prompt to OpenAI. The results are then made available to the function, which simply returns that chat response to the caller.

public record SemanticSearchRequest(string Prompt);

[FunctionName("PromptEmail")]
public static IActionResult PromptEmail(
    [HttpTrigger(AuthorizationLevel.Function, "post")] SemanticSearchRequest unused,
    [SemanticSearch("KustoConnectionString", "Documents", Query = "{Prompt}", ChatModel = "gpt-3.5-turbo", EmbeddingsModel = "text-embedding-ada-002")] SemanticSearchContext result)
{
    return new ContentResult { Content = result.Response, ContentType = "text/plain" };
}

The responses from the above function will be based on relevant document snippets which were previously uploaded to the vector database. For example, assuming you uploaded internal emails discussing a new feature of Azure Functions that supports OpenAI, you could issue a query similar to the following:

POST http://localhost:7127/api/PromptEmail
Content-Type: application/json

{
    "Prompt": "Was a decision made to officially release an OpenAI binding for Azure Functions?"
}

And you might get a response that looks like the following (actual results may vary):

HTTP/1.1 200 OK
Content-Length: 454
Content-Type: text/plain

There is no clear decision made on whether to officially release an OpenAI binding for Azure Functions as per the email "Thoughts on Functions+AI conversation" sent by Bilal. However, he suggests that the team should figure out if they are able to free developers from needing to know the details of AI/LLM APIs by sufficiently/elegantly designing bindings to let them do the "main work" they need to do. Reference: Thoughts on Functions+AI conversation.

IMPORTANT: Azure OpenAI requires you to specify a deployment when making API calls instead of a model. A deployment is a specific instance of a model that you have deployed to your Azure OpenAI resource. To make code more portable across OpenAI and Azure OpenAI, the bindings in this extension use the Model, ChatModel and EmbeddingsModel properties to refer to either the OpenAI model name or the Azure OpenAI deployment ID, depending on whether you're using OpenAI or Azure OpenAI.

All samples in this project rely on default model selection, which assumes the models are named after the OpenAI models. If you want to use an Azure OpenAI deployment, configure the Model, ChatModel and EmbeddingsModel properties explicitly in your binding configuration. Here are a couple of examples:

// "gpt-35-turbo" is the name of an Azure OpenAI deployment
[FunctionName(nameof(WhoIs))]
public static string WhoIs(
    [HttpTrigger(AuthorizationLevel.Function, Route = "whois/{name}")] HttpRequest req,
    [TextCompletion("Who is {name}?", Model = "gpt-35-turbo")] TextCompletionResponse response)
{
    return response.Content;
}

public record SemanticSearchRequest(string Prompt);

// "my-gpt-4" and "my-ada-2" are the names of Azure OpenAI deployments corresponding to gpt-4 and text-embedding-ada-002 models, respectively
[FunctionName("PromptEmail")]
public static IActionResult PromptEmail(
    [HttpTrigger(AuthorizationLevel.Function, "post")] SemanticSearchRequest unused,
    [SemanticSearch("KustoConnectionString", "Documents", Query = "{Prompt}", ChatModel = "my-gpt-4", EmbeddingsModel = "my-ada-2")] SemanticSearchContext result)
{
    return new ContentResult { Content = result.Response, ContentType = "text/plain" };
}

Default OpenAI models

  1. Chat Completion - gpt-3.5-turbo
  2. Embeddings - text-embedding-ada-002
  3. Text Completion - gpt-3.5-turbo

When using non-Azure OpenAI, you can omit the Model specification in attributes to use these default models.
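
For example, the earlier WhoIs function can omit the Model property when targeting non-Azure OpenAI, in which case the default text completion model (gpt-3.5-turbo) is used. This is a minimal sketch adapted from the C# example above:

// Non-Azure OpenAI: no Model specified, so the default gpt-3.5-turbo model is used.
[FunctionName(nameof(WhoIs))]
public static string WhoIs(
    [HttpTrigger(AuthorizationLevel.Function, Route = "whois/{name}")] HttpRequest req,
    [TextCompletion("Who is {name}?")] TextCompletionResponse response)
{
    return response.Content;
}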

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
