brainlid / langchain

Elixir implementation of an AI focused LangChain style framework.

Home Page: https://hexdocs.pm/langchain/

License: Other

Elixir 100.00%
chatgpt elixir langchain llm ai anthropic bumblebee claude-ai

langchain's Introduction

Elixir LangChain

Elixir LangChain enables Elixir applications to integrate AI services and self-hosted models.

Currently supported AI services:

  • OpenAI ChatGPT
  • OpenAI DALL-e 2 - image generation
  • Anthropic Claude
  • Google AI - https://generativelanguage.googleapis.com
  • Google Vertex AI - Gemini
  • Ollama
  • Mistral
  • Bumblebee self-hosted models - including Llama, Mistral and Zephyr

LangChain is short for Language Chain. An LLM, or Large Language Model, is the "Language" part. This library makes it easier for Elixir applications to "chain" or connect different processes, integrations, libraries, services, or functionality together with an LLM.

LangChain is a framework for developing applications powered by language models. It enables applications that are:

  • Data-aware: connect a language model to other sources of data
  • Agentic: allow a language model to interact with its environment

The main value props of LangChain are:

  1. Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
  2. Off-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasks

Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.

What is this?

Large Language Models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.

This library is aimed at assisting in the development of those types of applications.

Documentation

The online documentation can be found here.

Demo

Check out the demo project that you can download and review.

Relationship with JavaScript and Python LangChain

This library is written in Elixir and intended to be used with Elixir applications. The original libraries are LangChain JS/TS and LangChain Python.

The JavaScript and Python projects aim to integrate with each other as seamlessly as possible. The intended integration is so strong that all objects (prompts, LLMs, chains, etc.) are designed so they can be serialized and shared between the two languages.

This Elixir version does not aim for parity with the JavaScript and Python libraries. Why not?

  • JavaScript and Python are both Object Oriented languages. Elixir is Functional. We're not going to force a design that doesn't apply.
  • The JS and Python versions started before conversational LLMs were standard. They put a lot of effort into preserving history (like a conversation) when the LLM didn't support it. We're not doing that here.

This library was heavily inspired by, and based on, the way the JavaScript library actually worked and interacted with an LLM.

Installation

The package can be installed by adding langchain to your list of dependencies in mix.exs:

def deps do
  [
    {:langchain, "0.2.0"}
  ]
end

Alternatively, the 0.3.0 Release Candidate includes many additional features and some breaking changes:

def deps do
  [
    {:langchain, "0.3.0-rc.0"}
  ]
end

Configuration

Currently, the library is written to use the Req library for making API calls.

You can configure an organization ID and API key for OpenAI's API. The library also works with other OpenAI-compatible APIs as well as local models running on Bumblebee.

config/config.exs:

config :langchain, openai_key: System.get_env("OPENAI_API_KEY")
config :langchain, openai_org_id: System.get_env("OPENAI_ORG_ID")
# OR
config :langchain, openai_key: "YOUR SECRET KEY"
config :langchain, openai_org_id: "YOUR_OPENAI_ORG_ID"

It's possible to use a function or a tuple to resolve the secret:

config :langchain, openai_key: {MyApp.Secrets, :openai_api_key, []}
config :langchain, openai_org_id: {MyApp.Secrets, :openai_org_id, []}
# OR
config :langchain, openai_key: fn -> System.get_env("OPENAI_API_KEY") end
config :langchain, openai_org_id: fn -> System.get_env("OPENAI_ORG_ID") end
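
For example, the tuple form above could point at a small module like the following. This is only a sketch; MyApp.Secrets and its function names are placeholders for whatever your application already uses to manage secrets:

defmodule MyApp.Secrets do
  # Resolves credentials at runtime rather than at compile time.
  def openai_api_key, do: System.fetch_env!("OPENAI_API_KEY")
  def openai_org_id, do: System.get_env("OPENAI_ORG_ID")
end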

Usage

The central module in this library is LangChain.Chains.LLMChain. Most other pieces are either inputs to this, or structures used by it. For understanding how to use the library, start there.
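
As a quick orientation, a minimal chain looks like the following sketch. It assumes the OpenAI key is configured as shown above; the model name and prompt are illustrative:

alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# build a chain around a chat model, add a user message, and run it
{:ok, _updated_chain, response} =
  LLMChain.new!(%{llm: ChatOpenAI.new!(%{model: "gpt-3.5-turbo"})})
  |> LLMChain.add_message(Message.new_user!("Name an Elixir web framework."))
  |> LLMChain.run()

IO.puts(response.content)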

Exposing a custom Elixir function to ChatGPT

A really powerful feature of LangChain is making it easy to integrate an LLM into your application and expose features, data, and functionality from your application to the LLM.

Diagram showing LLM integration to application logic and data through a LangChain.Function

A LangChain.Function bridges the gap between the LLM and our application code. We choose what to expose and, using context, we can ensure any actions are limited to what the user has permission to do and access.

For an interactive example, refer to the project Livebook notebook "LangChain: Executing Custom Elixir Functions".

The following is an example of a function that receives parameter arguments.

alias LangChain.Function
alias LangChain.Message
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI

# map of data we want to be passed as `context` to the function when
# executed.
custom_context = %{
  "user_id" => 123,
  "hairbrush" => "drawer",
  "dog" => "backyard",
  "sandwich" => "kitchen"
}

# a custom Elixir function made available to the LLM
custom_fn =
  Function.new!(%{
    name: "custom",
    description: "Returns the location of the requested element or item.",
    parameters_schema: %{
      type: "object",
      properties: %{
        thing: %{
          type: "string",
          description: "The thing whose location is being requested."
        }
      },
      required: ["thing"]
    },
    function: fn %{"thing" => thing} = _arguments, context ->
      # our context is a pretend item/location map
      {:ok, context[thing]}
    end
  })

# create and run the chain
{:ok, updated_chain, %Message{} = message} =
  LLMChain.new!(%{
    llm: ChatOpenAI.new!(),
    custom_context: custom_context,
    verbose: true
  })
  |> LLMChain.add_functions(custom_fn)
  |> LLMChain.add_message(Message.new_user!("Where is the hairbrush located?"))
  |> LLMChain.run(mode: :while_needs_response)

# print the LLM's answer
IO.puts(message.content)
#=> "The hairbrush is located in the drawer."

Alternative OpenAI compatible APIs

There are several services and self-hosted applications that provide an OpenAI-compatible API for ChatGPT-like behavior. To use one, point the endpoint of the ChatOpenAI struct at the service's compatible chat endpoint.

For example, if a locally running service provided that feature, the following code could connect to the service:

{:ok, updated_chain, %Message{} = message} =
  LLMChain.new!(%{
    llm: ChatOpenAI.new!(%{endpoint: "http://localhost:1234/v1/chat/completions"}),
  })
  |> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run()

Bumblebee Chat Support

Bumblebee hosted chat models are supported. There is built-in support for Llama 2, Mistral, and Zephyr models.

Currently, function calling is NOT supported with these models.

ChatBumblebee.new!(%{
  serving: @serving_name,
  template_format: @template_format,
  receive_timeout: @receive_timeout,
  stream: true
})

The serving is the module name of the Nx.Serving that is hosting the model.

See the LangChain.ChatModels.ChatBumblebee documentation for more details.
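
Putting that together, a minimal sketch of wiring a Bumblebee-hosted model into a chain might look like the following. MyApp.ChatServing is an assumed Nx.Serving name started in your own supervision tree, streaming is turned off for simplicity, and depending on the model you may also need to set template_format as shown above:

alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatBumblebee
alias LangChain.Message

{:ok, _updated_chain, response} =
  LLMChain.new!(%{
    llm: ChatBumblebee.new!(%{serving: MyApp.ChatServing, stream: false})
  })
  |> LLMChain.add_message(Message.new_user!("Say hello in one sentence."))
  |> LLMChain.run()

IO.puts(response.content)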

Testing

To run all the tests, including the ones that perform live calls against the OpenAI API, use the following commands:

mix test --include live_call
mix test --include live_open_ai
mix test --include live_ollama_ai
mix test --include live_anthropic
mix test test/tools/calculator_test.exs --include live_call

NOTE: This will use the configured API credentials which creates billable events.

Otherwise, running the following will only run local tests making no external API calls:

mix test

Executing a specific test, whether it is tagged as a live_call or not, will run it, creating a potentially billable event.

When doing local development on the LangChain library itself, rename the .envrc_template to .envrc and populate it with your private API values. These are only used when running live tests that are explicitly requested.

Use a tool like Direnv or Dotenv to load the API values into the ENV when using the library locally.

langchain's People

Contributors

amokan, benjreinhart, benswift, bowyern, brainlid, cardosaum, chrisgreg, eltonfonseca, jadengis, matthusby, medoror, mryawe, petrus-jvrensburg, pkrawat1, raulchedrese, ream88, wojtekmach, yujonglee


langchain's Issues

Gemini Pro Issues

It looks like the current implementation of ChatGoogleAI doesn't work with the latest version of the Google Gemini API defined here.

This was enough to get it working for my immediate needs but there may be other issues.

So far the main differences I've come across are:

  • The Gemini API doesn't require a version in the URI path.
  • Response messages no longer contain an index field.
  • The API will not accept messages with an empty text field.

Not sure you'd want a new module for the newest Gemini API or to modify the existing ChatGoogleAI. Either way I'd be happy to put together a PR if it would help.

Upgrade Req library

A new version of the Req library was released before this library was published.

Upgrade to the latest Req. v0.4.x.

The API for streaming responses changed.

Running issue with ChatBumblebee & Llama2

I tried to use ChatBumblebee but it didn't work as expected

This is the livebook instruction I used:

https://gist.github.com/slashmili/ba0ac06a6346e793e357caf940a8a424

When I run the chain, I get lots of warnings:

13:55:43.944 [warning] Streaming call requested but no callback function was given.

And the answer was not as I expected:

COMBINED DELTA MESSAGE RESPONSE: %LangChain.Message{
  content: "I'm just an AI, I don't have access to your personal belongings or the layout of your home, so I cannot accurately locate your hairbrush. However, I can provide you with some general information about where hairbrushes are typically kept in a typical home.\n\nIn many households, hairbrushes are usually stored in a bathroom cabinet or on a bathroom countertop. Some people may also keep their hairbrushes in a dresser drawer or in a designated hair accessory case.\n\nIf you're having trouble finding your hairbrush, you might want to check these locations first. If you're still unable to find it, you could try asking other members of your household if they've seen it or check underneath your bed or in your closet.",
  index: nil,
  status: :complete,
  role: :assistant,
  function_name: nil,
  arguments: nil
}

It seems like it couldn't use Llama to find the right function.

Any idea what did I do wrong?

Update for OpenAI API changes to functions and tools

  • The list of functions is deprecated in favor of "tools" - docs
  • New tool_choice option to add support for. docs
  • function_call is deprecated in favor of tool_choice docs
  • Support receiving multiple function calls from assistant (reported as new ability with gpt-4-preview)

Implement the Router chain

The router chain (documented and implemented in the Python version) uses classification and branching to change paths for the following prompts.

https://python.langchain.com/docs/modules/chains/foundational/router

Something I want the router to do is support bringing in different functions based on the context. I may have a LOT of potential functions that the LLM could have access to, but I don't want to clutter it and use up tokens when they aren't relevant to the current goal/context. So routing offers a good way for conditionally bringing in other behaviors/functions.

Feature Request: while_needs_response only for functions

What I want to achieve: function calling should be done as a single operation (not delta by delta), as the current implementation works. But content messages should be streamable and receive deltas.

So I can have the current convenience combined with the streaming elegance for the user at the same time! Is this possible somehow without lots of code or a manual loop?

Multiple tool calls: Frequent timeouts

I'm experiencing this frequently:

** (exit) exited in: Task.await_many([%Task{mfa: {:erlang, :apply, 2}, owner: #PID<0.27977.0>, pid: #PID<0.29053.0>, ref: #Reference<0.0.3581059.2102371376.702611457.169984>}], 5000)
    ** (EXIT) time out
    (elixir 1.15.7) lib/task.ex:969: Task.await_many/5
    (elixir 1.15.7) lib/task.ex:953: Task.await_many/2
    (langchain 0.2.0) lib/chains/llm_chain.ex:456: LangChain.Chains.LLMChain.execute_tool_calls/2
    (langchain 0.2.0) lib/chains/llm_chain.ex:189: LangChain.Chains.LLMChain.run_while_needs_response/1

Can we make the Task.await_many timeout configurable (either via config or pass as an optional arg in the run call)? Glad to submit a PR.

Unable to stream response from OpenAI after executing tool calls

Streaming works great normally; however, after executing a tool call requested by the LLM (in this case OpenAI), I'm unable to figure out how to stream the final response.

I haven't found a reason why in the code or docs, is this a limitation of OpenAI? If so feel free to close this issue 🙂

Potentially related: #10

How to figure out rate limits?

For OpenAI, rate limits are specified here. They add fields to the response headers to show how many tokens are still left. To build something that respects those limits and retries after the limit has been reset, it would be great to have those values available in the response somewhere.
I quickly searched in the code but could not find anything. Is there currently a way to handle this?
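
For reference, this is a hedged sketch of reading those headers with a plain Req call outside of this library. The header names are the ones OpenAI documents; nothing here is exposed by LangChain today:

# make a non-streaming request and inspect the rate limit headers
resp =
  Req.post!("https://api.openai.com/v1/chat/completions",
    auth: {:bearer, System.fetch_env!("OPENAI_API_KEY")},
    json: %{model: "gpt-3.5-turbo", messages: [%{role: "user", content: "Hi"}]}
  )

Req.Response.get_header(resp, "x-ratelimit-remaining-tokens")
Req.Response.get_header(resp, "x-ratelimit-reset-tokens")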

Interrupting completion stream

Is there a way to interrupt the generation stream? It's technically possible, but I haven't found any mention in the docs.

It can be useful for user-facing frontends when a user can abort the answer of the assistant in the middle and rephrase the task.

OpenAI forum: https://community.openai.com/t/interrupting-completion-stream-in-python/30628

Ashton1998
Jun 2023
I made a simple test of @thehunmonkgroup's solution.

I make a call to gpt-3.5-turbo model with input:

Please introduce GPT model structure as detail as possible
And let the API print all the tokens. The statistic result from the OpenAI usage page is (I am a new user and am not allowed to post media, so I only copy the result):
17 prompt + 441 completion = 568 tokens

After that, I stop the generation when the number of tokens received is 9, the result is:
17 prompt + 27 completion = 44 tokens

It seems there are roughly extra 10 tokens generated after I stop the generation.

Then I stop the generation when the number is 100, the result is:
17 prompt + 111 completion = 128 tokens

So I think the solution works well, but with an extra 10~20 tokens every time.

`LLMChain.run()` for ollama doesn't work

alias LangChain.{Chains.LLMChain, Message, LangChain.ChatModels.ChatOllamaAI}
LLMChain.new!(%{
  llm: LangChain.ChatModels.ChatOllamaAI.new!(%{
    model: "mixtral:8x7b-instruct-v0.1-q4_K_M"
  }),
  verbose: true,
})
|> LLMChain.add_message(Message.new_user!("hello world!"))
|> LLMChain.run()
** (CaseClauseError) no case clause matching: {:ok, %LangChain.Message{content: " Hello! It's nice to see you. Is there something specific you would like to talk about or ask me a question? I'm here to help with any programming-related questions you have.\n\nIf you don't have anything specific in mind, that's okay too! We can just chat about whatever interests you. Do you enjoy coding? What are some of your favorite programming languages and why do you like them?\n\nI'm particularly fond of Python, because it's a great language for beginners to learn, with a clean and easy-to-understand syntax. It's also very versatile and can be used for a wide variety of applications, from web development to data analysis to machine learning.\n\nBut enough about me! I want to hear about you. What brings you to coding? Is there something specific you're hoping to learn or accomplish with your programming skills? Let me know and I'll do my best to help you out.", index: nil, status: :complete, role: :assistant, function_name: nil, arguments: nil}}
    (langchain 0.1.7) lib/chains/llm_chain.ex:209: LangChain.Chains.LLMChain.do_run/1
    (langchain 0.1.7) lib/chains/llm_chain.ex:170: LangChain.Chains.LLMChain.run/2
    iex:4: (file)

Optional dependencies/Behaviour-based 'tools'?

Given that the crawling effort was merged in (which adds deps for :floki and :crawly) and the existing :abacus dependency, is there a planned effort to make these sorts of deps optional and maybe implement some sort of behaviour to facilitate making things more flexible for consumers of the project?

I'd be willing to help on this if there is interest.

Nothing at all wrong with any of the deps, but not every use-case will need them.

OpenAI Function Call Support

Hi, I am coming from Python and really like what I have seen so far in the Elixir land. I am thinking about porting one of my LLM applications to Elixir, and one of the problems is that I rely heavily on OpenAI's function call feature.
OpenAI's function call (https://openai.com/blog/function-calling-and-other-api-updates) is a good way to output structured data with an LLM and to use external tools. However, to use it we need to write a function parameter specification such as

{
        'name': 'extract_student_info',
        'description': 'Get the student information from the body of the input text',
        'parameters': {
            'type': 'object',
            'properties': {
                'name': {
                    'type': 'string',
                    'description': 'Name of the person'
                },
                'major': {
                    'type': 'string',
                    'description': 'Major subject.'
                }
            }
        }
    }

The python library Instructor (https://github.com/jxnl/instructor) provides a good way to write such specification using another python library Pydantic (https://docs.pydantic.dev/latest/), which is essentially a schema validation library.

Wondering if Elixir provides any way to write such a specification using something more convenient than a plain map? If yes, is this library a good place to implement such a feature?

Exposing usage data from API response

Hi @brainlid, first of all thanks for this project, it has been very useful to us so far.

We are running into the issue that we want to be able to track our token usage on OpenAI. This is given as part of the response, but I believe LangChain doesn't do anything with this information yet.

I am wondering if you would consider a PR to expose this data somehow.

And if you are, whether you have a preferred way to do this. We would probably be happy with simply making the raw response available somehow, as a trap door. But if you want to structure this data and translate it per API, we could also talk about that.

Thanks, Derek

Call code outside of langchain in routes

I have two related questions for routing:

  1. PromptRoute always requires a chain, but this feels quite limiting. It can be convenient to use the router's outcome as the end result of a pipeline—that is, sometimes I just need to know the selected route so I can delegate to the right part of my code. As is, I have to pass a chain in. It would be very handy to be able to pass in, say, a callback function, that simply gets the name of the chosen route.
  2. Related to that: In the evaluate function of the RoutingChain, the debugger helpfully logs the chosen route, but I can't access that in code, so if I want to do the above (run my own code depending on which route is chosen), I have to create dummy chains for every route, then pattern match on whichever chain is returned by the evaluate function. It's a lot of overhead when the name is right there, just out of reach. :)

Would it be possible to do any of the following:

  1. Set chain to optional on PromptRoute?
  2. Set a callback function on PromptRoute?
  3. In RoutingChain, create a new function that evaluates but just returns the name instead of a chain?

Basically what I'm looking for is any path to delegate out to my code from the existing routing structure without a lot of overhead that's ultimately thrown away. I do, of course, see the value of the chain in this pipeline... it's just that I also know that there are a lot of times when I don't need that overhead.

Also, it's very possible I'm missing something that would allow me to accomplish exactly what I'm asking about, and I've just missed it entirely! Let me know which, and I'm happy to help out.

Add support for Bumblebee functions?

Bumblebee doesn't support a constrained output of only valid JSON.

In the early days of LangChain, they implemented an alternate approach for functions that was more of a hack, but worked well enough.

Investigate if this approach could work for bringing functions to Bumblebee models. It would still help if the model being run understood JSON, functions, etc.
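
As a rough illustration of that style of hack (not library code), the system prompt could ask the model to answer only with JSON like {"function": "<name>", "arguments": {...}}, and the reply would then be decoded, falling back to plain text when it doesn't parse. A hypothetical helper for the parsing side:

defmodule MyApp.PromptFunctions do
  # Try to interpret the model's reply as a prompt-based "function call".
  def parse_reply(reply) when is_binary(reply) do
    case Jason.decode(reply) do
      {:ok, %{"function" => name, "arguments" => args}} when is_map(args) ->
        {:call, name, args}

      _ ->
        # the model answered with plain text instead of a function call
        {:text, reply}
    end
  end
end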

Vector store (pgvector/pinecone) support?

Thank you @brainlid for starting this project! Played with the two live notebooks, works very well! Simple and very clean, well designed interfaces. 👍

I am wondering what's the roadmap moving forward, especially around vector store support.

Would very much love to migrate my NodeJS langchain projects to Phoenix/Elixir.

Thanks again for the effort! Can't wait to write more using it ❤️

add Replicate API option

Even though it's just OpenAI for now the code is nice and modular and obviously extensible to other hosted LLM providers (🙌🏻).

I'm not sure if there's a roadmap somewhere that I've missed, but Replicate might be a good option for the next "platform" to be added. It's one place that Meta are putting up their various Llama models. However, I think it'd only support the LangChain.Message stuff - there's no function call support in the models as yet.

I'd be open to putting together a PR to add replicate support (their official Elixir client lib uses httpoison, so I guess it'd be better to just call the Replicate API directly using Req).

Would you be interested in accepting it? Happy to discuss implementation strategies, because I know the move from single -> multiple platform options introduces some decisions & tradeoffs.

I get `{:error, "Unexpected response. {:ok, %LangChain.Chains.LLMChain{ ... }}}` when using the DataExtractionChain

Given the following code in Livebook:

itinerary_1_day = """
Day 1:

Arrive in Delft and check-in at the Hotel De Emauspoort, a cozy boutique hotel located in the heart of the city.
"""

itinerary_day_schema_parameters = %{
  type: "object",
  properties: %{
    destination_name: %{type: "string"},
    destination_type: %{type: "string"}
  },
  required: []
}

# Model setup
{:ok, chat} = LangChain.ChatModels.ChatOpenAI.new(%{model: "gpt-3.5-turbo", temperature: 0, stream: false, verbose: true})

# run the chain on the text information
data_prompt = itinerary_1_day

{:ok, result} = LangChain.Chains.DataExtractionChain.run(chat, itinerary_day_schema_parameters, data_prompt)

I keep getting the following error when running the cell:

** (MatchError) no match of right hand side value: {:error, "Unexpected response. {:ok, %LangChain.Chains.LLMChain{llm: %LangChain.ChatModels.ChatOpenAI{endpoint: \"https://api.openai.com/v1/chat/completions\", model: \"gpt-3.5-turbo\", temperature: 0.0, frequency_penalty: 0.0, receive_timeout: 60000, n: 1, stream: false}, verbose: false, functions: [%LangChain.Function{name: \"information_extraction\", description: \"Extracts the relevant information from the passage.\", function: nil, parameters_schema: %{type: \"object\", required: [\"info\"], properties: %{info: %{type: \"array\", items: %{type: \"object\", required: [], properties: %{destination_name: %{type: \"string\"}, destination_type: %{type: \"string\"}}}}}}}], function_map: %{\"information_extraction\" => %LangChain.Function{name: \"information_extraction\", description: \"Extracts the relevant information from the passage.\", function: nil, parameters_schema: %{type: \"object\", required: [\"info\"], properties: %{info: %{type: \"array\", items: %{type: \"object\", required: [], properties: %{destination_name: %{type: \"string\"}, destination_type: %{type: \"string\"}}}}}}}}, messages: [%LangChain.Message{content: \"You are a helpful assistant that extracts structured data from text passages. Only use the functions you have been provided with.\", index: nil, status: :complete, role: :system, function_name: nil, arguments: nil}, %LangChain.Message{content: \"Extract and save the relevant entities mentioned in the following passage together with their properties.\\n\\n  Passage:\\n  Day 1:\\n\\nArrive in Delft and check-in at the Hotel De Emauspoort, a cozy boutique hotel located in the heart of the city.\\n\", index: nil, status: :complete, role: :user, function_name: nil, arguments: nil}, %LangChain.Message{content: nil, index: 0, status: :complete, role: :assistant, function_name: \"information_extraction\", arguments: %{\"info\" => [%{\"destination_name\" => \"Delft\", \"destination_type\" => \"City\"}, %{\"destination_name\" => \"Hotel De Emauspoort\", \"destination_type\" => \"Hotel\"}]}}], custom_context: nil, delta: nil, last_message: %LangChain.Message{content: nil, index: 0, status: :complete, role: :assistant, function_name: \"information_extraction\", arguments: %{\"info\" => [%{\"destination_name\" => \"Delft\", \"destination_type\" => \"City\"}, %{\"destination_name\" => \"Hotel De Emauspoort\", \"destination_type\" => \"Hotel\"}]}}, needs_response: true, callback_fn: nil}, %LangChain.Message{content: nil, index: 0, status: :complete, role: :assistant, function_name: \"information_extraction\", arguments: %{\"info\" => [%{\"destination_name\" => \"Delft\", \"destination_type\" => \"City\"}, %{\"destination_name\" => \"Hotel De Emauspoort\", \"destination_type\" => \"Hotel\"}]}}}"}
    (stdlib 5.1.1) erl_eval.erl:498: :erl_eval.expr/6
    /home/sylvester/Code/langchain-livebook-examples/embedding-test.livemd#cell:4ashbyu5o6nm3h56cbxgozcrutf33va7:7: (file)

It looks like a valid result, but it's giving me an error anyway. I'm not sure whether I'm doing something wrong or if this is a bug in LangChain.

Request for Community ChatModel: ChatOllama

I am interested in using Ollama as a self-hosted LLM in the Elixir LangChain project. I was wondering if it's possible to extend the support to include ChatModel.ChatOllama as well.

Detail:
Python langchain_community.chat_models.ChatOllama

Request:
I kindly request the community's assistance or guidance on how to integrate ChatOllama into the Langchain project. I am new to Elixir, (no idea what I am doing most of the time) but I am eager to learn and contribute to this exciting project.

Make Elixir Function optional for LangChain.Function?

For workflows where we just need structured JSON output from the models (e.g. data extraction) through using tools, we may not need to execute any code in the client or send any messages back to the models. For such cases, does it make sense to make the function (Elixir Function) attribute optional for LangChain.Function?

(From Anthropic API Docs)

### How tool use works
Integrate external tools with Claude in these steps:

1. Provide Claude with tools and a user prompt
- Define tools with names, descriptions, and input schemas in your API request.
- Include a user prompt that might require these tools, e.g., “What’s the weather in San Francisco?”

2. Claude decides to use a tool
- Claude assesses if any tools can help with the user’s query.
- If yes, Claude constructs a properly formatted tool use request.
- The API response has a stop_reason of tool_use, signaling Claude’s intent.

3. Extract tool input, run code, and return results
- On your end, extract the tool name and input from Claude’s request.
- Execute the actual tool code client-side.
- Continue the conversation with a new user message containing a tool_result content block.

4. Claude uses tool result to formulate a response
- Claude analyzes the tool results to craft its final response to the original user prompt.

**Note: Steps 3 and 4 are optional. For some workflows, Claude’s tool use request (step 2) might be all you need, without sending results back to Claude.**

Support Azure OpenAI

I've managed to use Azure OpenAI with the following minor change:

diff --git a/lib/chat_models/chat_open_ai.ex b/lib/chat_models/chat_open_ai.ex
index 5dd610e..06b4d0c 100644
--- a/lib/chat_models/chat_open_ai.ex
+++ b/lib/chat_models/chat_open_ai.ex
@@ -57,13 +57,13 @@ defmodule LangChain.ChatModels.ChatOpenAI do
           {:ok, Message.t() | MessageDelta.t() | [Message.t() | MessageDelta.t()]}
           | {:error, String.t()}

-  @create_fields [:model, :temperature, :frequency_penalty, :n, :stream, :receive_timeout]
+  @create_fields [:endpoint, :model, :temperature, :frequency_penalty, :n, :stream, :receive_timeout]
   @required_fields [:model]

   @spec get_org_id() :: String.t() | nil
@@ -220,7 +220,7 @@ defmodule LangChain.ChatModels.ChatOpenAI do
       Req.new(
         url: openai.endpoint,
         json: for_api(openai, messages, functions),
-        auth: {:bearer, get_api_key()},
+        headers: %{"api-key" => get_api_key()},
         receive_timeout: openai.receive_timeout
       )

The hope is to make a general base for adapting this change to support more chat models.

%Mint.TransportError{reason: :closed}

I'm intermittently getting %Mint.TransportError{reason: :closed} as a result from do_api_request/4 - has this been seen before or are there any known ideas why?

I can fire off the exact same request afterwards and it works most of the time, but 1% of calls are failing...

EDIT: My team found that this is an ongoing error with Finch. Look at the latest comments here.

Are we able to swap the HTTP client out for another?

Upgrade abacus requirements to support elixir 1.17 and OTP 27

With elixir 1.17 and OTP 27, mix test will fail with the following error:

==> abacus
Compiling 3 files (.erl)
src/new_parser.yrl:54:7: syntax error before: 'else'
%   54|       {'else', '$5'}
%     |       ^

src/new_parser.erl:819:13: function yeccpars2_46_/1 undefined
%  819|  NewStack = yeccpars2_46_(Stack),
%     |             ^

src/new_parser.yrl:71:2: inlined function yeccpars2_46_/1 undefined
%   71| expr -> expr '/' expr : {'/', [], ['$1', '$3']}.
%     |  ^

could not compile dependency :abacus, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile abacus --force", update it with "mix deps.update abacus" or clean it with "mix deps.clean abacus"

I've submitted a PR to abacus (narrowtux/abacus#27) to fix it. The abacus requirement will need to be upgraded once it's released.

In the meantime, you can try the fix by using this in mix.exs:

{:abacus, github: "MrYawe/abacus"},

Support Llama 2

Add support for the Llama 2 LLM.

Specifically interested in supporting Nx/Bumblebee usage. A main complaint against OpenAI/ChatGPT and Google/Bard is that private data is being sent to an external entity that is probably being used for training data.

A fully locally hosted and business-use compatible solution is preferred.

Is it possible to use this library to send an image for use with the "gpt-4-vision-preview" model?

I am experimenting with different libs to work with the OpenAI GPT APIs. I am trying to work out how to send an image with LangChain but nothing I do seems to work. I had a similar issue with ExOpenAI library (now solved) which you can see here: dvcrn/ex_openai#13

I am trying to work out how to do something equivalent to this (which uses a raw HTTPoison HTTP POST request):

  defp describe_image_using_httpoison(data, prompt) do
    payload = %{
      "model" => get_openai_description_model(),
      "messages" => [
        %{
          "role" => "user",
          "content" => [
            %{"type" => "text", "text" => prompt},
            %{
              "type" => "image_url",
              "image_url" => %{
                "url" => "data:image/jpeg;base64," <> data.image.data64
              }
            }
          ]
        }
      ],
      "max_tokens" => 1_000
    }

    case HTTPoison.post!(
           "https://api.openai.com/v1/chat/completions",
           Jason.encode!(payload),
           get_headers(),
           recv_timeout: 20000
         ) do
      %HTTPoison.Response{status_code: 200, body: body} ->
        case Jason.decode(body) do
          {:ok, content} ->
            [result | _] = content["choices"]
            description = result["message"]["content"]
            description

          error ->
            dbg(error)
        end

      error ->
        dbg(error)
    end
  end

That is just a snippet, but the "type" => "image_url"... bit is the bit I am trying to replicate with LangChain.

I have tried this:

  def describe(data, user_prompt \\ @desc_user_prompt) do
    {:ok, _updated_chain, response} =
      %{llm: ChatOpenAI.new!(%{model: @llm_model})}
      |> LLMChain.new!()
      |> LLMChain.add_messages([
        Message.new_system!(@desc_system_prompt),
        Message.new_user!(user_prompt),
        Message.new_user!(get_prompt_attrs_for_image_from_data(data.image))
      ])
      |> LLMChain.run()

    dbg(response)
    Map.put(data, :description, response)
    response.content
  end

  defp get_prompt_attrs_for_image_from_data(%STL.ML.ImageData{src: _, data64: imgdata64, error: _ }) do
    {:ok, content } = %{
      type: :image_url,
      image_url: %{url: "data:image/jpeg;base64," <> imgdata64}
    } |> Jason.encode()
    content
    # %{
    #   role: :user,
    #   content: %{
    #     type: :image_url,
    #     image_url: %{url: "data:image/jpeg;base64," <> imgdata64}
    #   }
    # }
  end

But no matter what I do with the function get_prompt_attrs_for_image_from_data nothing seems to work. If I just encode the content as a string, then the OpenAI API flips out with a "too many tokens" because the image data is too big. But anything other than a string for content causes a validation error from LangChain.

Is there any way to send arbitrary post params in a LangChain call?

PS: For reference, this is how OpenAI describes the type: :image_url params: https://platform.openai.com/docs/guides/vision

Support for Embeddings

Hey there!

I'd like to see LangChain support for embeddings! Probably first OpenAI's embeddings and then HuggingFace and others.

Is this something that you have thought about and have a plan? I'd love to get some guidance and give it a try.

Disclaimer: I'm new to this field, so if I say something that doesn't make sense, please point it out.

`do_process_response` does not contain handling for all inputs.

In chat_open_ai.ex, the function do_process_response can either take decoded JSON or a {:error, %Jason.DecodeError{}}.

In the definition of the function, we have the matching patterns for properly decoded JSON, but we lack a clause for when decoding fails.

An example of when it fails to match input patterns is reproduced below. It comes from the langchain_demo project (really nice one!).

[debug] HANDLE EVENT "validate" in LangChainDemoWeb.AgentChatLive.Index
  Parameters: %{"_target" => ["chat_message", "content"], "chat_message" => %{"content" => "Hi!"}}
[debug] Replied in 497µs
[debug] HANDLE EVENT "save" in LangChainDemoWeb.AgentChatLive.Index
  Parameters: %{"chat_message" => %{"content" => "Hi!"}}
[debug] Replied in 370µs
[error] Task #PID<0.599.0> started from #PID<0.575.0> terminating
** (FunctionClauseError) no function clause matching in LangChain.ChatModels.ChatOpenAI.do_process_response/1
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:413: LangChain.ChatModels.ChatOpenAI.do_process_response({:error, %Jason.DecodeError{position: 0, token: nil, data: <<31, 139, 8, 0, 0, 0, 0, 0, 0, 3, 76, 143, 177, 110, 3, 33, 16, 68, 251, 251, 138, 17, 181, 185, 139, 173, 40, 150, 248, 134, 148, 233, 207, 8, 54, 6, 9, 88, 12, 123, 78, 44, 203, 255, 30, ...>>}})
    (elixir 1.15.2) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:346: LangChain.ChatModels.ChatOpenAI.decode_streamed_data/1
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:262: anonymous fn/4 in LangChain.ChatModels.ChatOpenAI.do_api_request/4
    (finch 0.16.0) lib/finch/http1/conn.ex:243: Finch.Conn.receive_response/8
    (finch 0.16.0) lib/finch/http1/conn.ex:120: Finch.Conn.request/6
    (finch 0.16.0) lib/finch/http1/pool.ex:45: anonymous fn/8 in Finch.HTTP1.Pool.request/5
    (nimble_pool 1.0.0) lib/nimble_pool.ex:349: NimblePool.checkout!/4
    (finch 0.16.0) lib/finch/http1/pool.ex:38: Finch.HTTP1.Pool.request/5
    (finch 0.16.0) lib/finch.ex:306: anonymous fn/6 in Finch.stream/5
    (telemetry 1.2.1) /Users/mcs/git/github/brainlid/langchain_demo/deps/telemetry/src/telemetry.erl:321: :telemetry.span/3
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:288: anonymous fn/6 in LangChain.ChatModels.ChatOpenAI.do_api_request/4
    (req 0.4.4) lib/req/request.ex:991: Req.Request.run_request/1
    (req 0.4.4) lib/req/request.ex:936: Req.Request.run/1
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:318: LangChain.ChatModels.ChatOpenAI.do_api_request/4
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:184: LangChain.ChatModels.ChatOpenAI.call/4
    (langchain 0.1.1) lib/chains/llm_chain.ex:204: LangChain.Chains.LLMChain.do_run/1
    (langchain 0.1.1) lib/chains/llm_chain.ex:186: LangChain.Chains.LLMChain.run_while_needs_response/1
    (langchain_demo 0.1.0) lib/langchain_demo_web/live/agent_chat_live/index.ex:315: anonymous fn/2 in LangChainDemoWeb.AgentChatLive.Index.run_chain/1
    (phoenix_live_view 0.20.0) lib/phoenix_live_view/async.ex:77: Phoenix.LiveView.Async.do_async/5
Function: #Function<8.28433447/0 in Phoenix.LiveView.Async.run_async_task/4>
    Args: []

GroqCloud support

Hi, thanks for creating this nice library 😊

I recently came across GroqCloud. It seems like an alternative to OpenAI and is powered by open source models like Mixtral, Llama 3 and Gemma. They have two big advantages over OpenAI, which are speed and price.

So just curious if it would be possible and would make sense to add support for GroqCloud?

Handle streamed JSON response data when broken up across multiple data rows

This is part of Issue #28 but not specific to Azure.

Data is sometimes reported as being returned like the following:

DATA:- : "data: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\",\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" if\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" there\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" are\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" specific\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" details\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" about\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" 
the\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" new\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" developments\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" or\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" the\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" potential\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" value\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" 
they\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" could\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" bring\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" to\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" The\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" third\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"s"
DATA:- : "afe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" company\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\"2\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\",\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" be\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\n"
DATA:- : "data: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" sure\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" to\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" include\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" those\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" as\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" well\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\".\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\n"
DATA:- : "data: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":\"stop\",\"index\":0,\"delta\":{},\"content_filter_results\":{}}]}\n\n"
DATA:- : "data: [DONE]\n\n"
[error] Received invalid JSON: %Jason.DecodeError{position: 177, token: nil, data: "{\"id\":\"chatcmpl-8lZf0buihhFh5SCZrrKbnkiC7RFhu\",\"object\":\"chat.completion.chunk\",\"created\":1706349186,\"model\":\"gpt-3.5-turbo-1106\",\"system_fingerprint\":\"fp_b57c83dd65\",\"choices\":"}
[error] Received invalid JSON: %Jason.DecodeError{position: 74, token: nil, data: "[{\"index\":0,\"delta\":{\"content\":\".\"},\"logprobs\":null,\"finish_reason\":null}]}"}
[error] Received invalid JSON: %Jason.DecodeError{position: 68, token: nil, data: "{\"id\":\"chatcmpl-8lZf0buihhFh5SCZrrKbnkiC7RFhu\",\"object\":\"chat.comple"}
[error] Received invalid JSON: %Jason.DecodeError{position: 0, token: nil, data: "tion.chunk\",\"created\":1706349186,\"model\":\"gpt-3.5-turbo-1106\",\"system_fingerprint\":\"fp_b57c83dd65\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\" a\"},\"logprobs\":null,\"finish_reason\":null}]}"}
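
A minimal sketch of one way to handle this (not the library's current implementation): buffer incoming chunks, split on the "\n\n" event boundary, decode the complete events, and carry the trailing partial event over to the next chunk:

defmodule MyApp.SSEBuffer do
  # Returns {decoded_events, leftover_buffer}; start with buffer = "".
  def process(chunk, buffer) do
    parts = String.split(buffer <> chunk, "\n\n")
    {incomplete, complete} = List.pop_at(parts, -1)

    events =
      complete
      |> Enum.map(&String.trim_leading(&1, "data: "))
      |> Enum.reject(&(&1 == "" or &1 == "[DONE]"))
      |> Enum.flat_map(fn json ->
        case Jason.decode(json) do
          {:ok, decoded} -> [decoded]
          {:error, _} -> []
        end
      end)

    {events, incomplete}
  end
end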
