
Comments (6)

krrishdholakia avatar krrishdholakia commented on May 20, 2024

I believe we're missing the usage details.

from litellm.

krrishdholakia avatar krrishdholakia commented on May 20, 2024

What is the expected behavior from litellm? Exposing a spend-calculating function?

```python
from litellm import cost_calculator_completion

cost = cost_calculator_completion(llm_provider, input_tokens, output_tokens)
```

??
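A minimal sketch of what such a helper could look like. The price table and its values are purely illustrative, and `cost_calculator_completion` here is the proposed name, not an existing litellm API:

```python
# Hypothetical per-1K-token prices in USD; values are illustrative only.
PRICE_PER_1K_TOKENS = {
    "openai": {"input": 0.0015, "output": 0.002},
    "azure": {"input": 0.0015, "output": 0.002},
}

def cost_calculator_completion(llm_provider, input_tokens, output_tokens):
    """Return the estimated spend (USD) for one completion call."""
    prices = PRICE_PER_1K_TOKENS[llm_provider]
    return (input_tokens / 1000) * prices["input"] \
         + (output_tokens / 1000) * prices["output"]

cost = cost_calculator_completion("openai", 900, 100)
```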


krrishdholakia avatar krrishdholakia commented on May 20, 2024

I think what feels more natural is if litellm actually returned the number of tokens.

  • I believe the package currently returns just the text, which makes it hard to calculate what the tokens even are (tokenization differs across providers). The OpenAI response object does return this, so it would be natural to me for this library to map to that.
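One way to sketch that mapping: wrap a provider's plain-text completion in an OpenAI-shaped dict with a `usage` key. `to_openai_shape` and the `tokenizer` callable are hypothetical names for illustration, not litellm internals:

```python
def to_openai_shape(text, prompt, tokenizer):
    """Wrap a plain-text completion in an OpenAI-style response dict,
    attaching token counts under the standard `usage` key."""
    prompt_tokens = len(tokenizer(prompt))
    completion_tokens = len(tokenizer(text))
    return {
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
        "usage": {
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "total_tokens": prompt_tokens + completion_tokens,
        },
    }

# Using a naive whitespace tokenizer as a stand-in for a real one:
resp = to_openai_shape("Hi there!", "Say hi please", str.split)
```

Callers can then read `resp["usage"]` uniformly, regardless of which provider produced the text.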


krrishdholakia avatar krrishdholakia commented on May 20, 2024

I did something like this on my own, but it's obviously not ideal when dealing with multiple providers (if it's not OpenAI, I'm just assuming num tokens = length of string):

```python
def logging_fn(self, model, scenario, messages, response, logger_fn=None):
    try:
        self.print_verbose(f"the model is: {model}")
        status = "success"
        if logger_fn:
            logger_fn(model=model, messages=messages, response=response)

        # POST this data to the supabase logs
        llm_provider = "azure"  # assume azure by default
        if model in model_provider_map:
            llm_provider = model_provider_map[model]  # look up by the actual model name

        if scenario == Scenario.generate:
            if response:
                input_text = " ".join([message["content"] for message in messages])
                output_text = response['choices'][0]['message']['content']
                # Fallback: character count stands in for token count
                number_of_input_tokens = len(input_text)
                number_of_output_tokens = len(output_text)

                if llm_provider == "openai" or llm_provider == "azure":
                    number_of_input_tokens = len(self.openai_encoding.encode(input_text))
                    number_of_output_tokens = len(self.openai_encoding.encode(output_text))
            else:
                number_of_input_tokens = 0
                number_of_output_tokens = 0

        elif scenario == Scenario.embed:
            self.print_verbose(f"input_text: {messages}")
            if isinstance(messages, str):
                input_text = messages
            elif isinstance(messages, list):
                input_text = " ".join(messages)

            number_of_input_tokens = len(self.openai_encoding.encode(input_text))
            number_of_output_tokens = 0  # embedding is priced on input only

        data = {
            "user_email": "[email protected]",
            "model": {"llm_provider": llm_provider, "model_name": model},
            "input_text": {"number_of_tokens": number_of_input_tokens},
            "output_text": {"number_of_tokens": number_of_output_tokens},
        }
        supabase.table('plaid_logs').upsert(data).execute()
    except Exception:
        self.print_verbose(f"An error occurred: {traceback.format_exc()}")
```
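As an aside, the character-length fallback above overcounts badly for non-OpenAI providers. Even without a real tokenizer, a whitespace-split heuristic is usually much closer. A small hedged helper (the name and heuristic are mine, not litellm's):

```python
def approx_token_count(text, encoding=None):
    """Best-effort token count: use a real encoding when one is
    available (e.g. a tiktoken encoding for OpenAI/Azure), otherwise
    fall back to a rough whitespace-based estimate instead of raw
    character length."""
    if encoding is not None:
        return len(encoding.encode(text))
    # Rough heuristic: ~1 token per word is far closer than 1 per character.
    return len(text.split())
```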


krrishdholakia avatar krrishdholakia commented on May 20, 2024

OpenAI response object:

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```
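Once a `usage` block like that is present, spend tracking on the caller's side reduces to reading those fields. A sketch, with illustrative prices ($0.0015 / 1K prompt tokens, $0.002 / 1K completion tokens):

```python
# Minimal response dict containing the OpenAI-style `usage` fields.
response = {
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21}
}

# Illustrative per-1K-token prices; real prices vary by model.
usage = response["usage"]
cost = (usage["prompt_tokens"] / 1000) * 0.0015 \
     + (usage["completion_tokens"] / 1000) * 0.002
```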


ishaan-jaff avatar ishaan-jaff commented on May 20, 2024

Damn - both of us just had the same request, see #11
😱💘

