
Syntax error with main.py (developer) [CLOSED]

smol-ai commented on July 3, 2024
Syntax error with main.py

from developer.

Comments (2)

kufton commented on July 3, 2024

I did some more digging, and it looks like what was actually happening is that I was hitting a token limit of some form. Here's the actually helpful output (for once, Python's debug traces are helpful!):

```
Traceback (most recent call last):
  File "/pkg/modal/_container_entrypoint.py", line 330, in handle_input_exception
    yield
  File "/pkg/modal/_container_entrypoint.py", line 403, in call_function_sync
    res = fun(*args, **kwargs)
  File "/root/debugger.py", line 79, in generate_response
    response = openai.ChatCompletion.create(**params)
  File "/usr/local/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.9/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 35146678 tokens. Please reduce the length of the messages.
```

A second traceback followed, with the same frames but a different error:

```
Traceback (most recent call last):
  [same frames as above]
openai.error.APIError: Internal error
{
  "error": {
    "message": "Internal error",
    "type": "internal_error",
    "param": null,
    "code": "internal_error"
  }
}
500 {'error': {'message': 'Internal error', 'type': 'internal_error', 'param': None, 'code': 'internal_error'}}
{'Date': 'Wed, 07 Jun 2023 00:27:46 GMT', 'Content-Type': 'application/json; charset=utf-8', 'Content-Length': '152', 'Connection': 'keep-alive', 'vary': 'Origin', 'x-ratelimit-limit-requests': '3500', 'x-ratelimit-limit-tokens': '90000', 'x-ratelimit-remaining-requests': '3499', 'x-ratelimit-remaining-tokens': '85903', 'x-ratelimit-reset-requests': '17ms', 'x-ratelimit-reset-tokens': '2.73s', 'x-request-id': '29f4e81e2e54151dc0783da1b02df82d', 'strict-transport-security': 'max-age=15724800; includeSubDomains', 'CF-Cache-Status': 'DYNAMIC', 'Server': 'cloudflare', 'CF-RAY': '7d34c4be6bfc3925-IAD', 'alt-svc': 'h3=":443"; ma=86400'}
```

The identical `APIError: Internal error` traceback recurred about a minute later (00:28:50 GMT, request ID `eda7165b6dc46f45ad0d94005952a39f`).
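The first traceback is the real clue: the request's messages exceeded the model's context window. One common mitigation is to drop the oldest messages until the history fits a token budget. Below is a minimal sketch of that idea; the word-count `count_tokens` is a stand-in for real tiktoken counting, and `trim_messages` is a hypothetical helper, not part of smol-developer:

```python
def count_tokens(text):
    # Stand-in for tiktoken's encoding.encode(); counts words instead.
    return len(text.split())

def trim_messages(messages, budget):
    """Keep the newest messages whose combined token cost fits `budget`."""
    kept = []
    total = 0
    for msg in reversed(messages):  # walk newest-first
        cost = count_tokens(msg["content"])
        if total + cost > budget:
            break  # everything older than this is dropped
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "first question with many extra words here"},
    {"role": "user", "content": "second question"},
]
print(trim_messages(messages, budget=8))
# → [{'role': 'user', 'content': 'second question'}]
```

A real version would count with `tiktoken.encoding_for_model(...)` and would usually pin the system prompt so it is never dropped.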


kufton commented on July 3, 2024

Alright, yes. I patched this using code from one of the demo videos:

```python
import sys
import os
import modal
import ast
import time  # used to pause between API calls

stub = modal.Stub("smol-developer-v1")
generatedDir = "generated"
openai_image = modal.Image.debian_slim().pip_install("openai", "tiktoken")
openai_model = "gpt-4"  # or 'gpt-3.5-turbo'
openai_model_max_tokens = 2000  # i wonder how to tweak this properly


@stub.function(
    image=openai_image,
    secret=modal.Secret.from_dotenv(),
    retries=modal.Retries(
        max_retries=3,
        backoff_coefficient=2.0,
        initial_delay=1.0,
    ),
    # concurrency_limit=5,
    # timeout=120,
)
def generate_response(system_prompt, user_prompt, *args):
    import openai
    import tiktoken

    def reportTokens(prompt):
        # Print the token count of a prompt alongside its first 50 chars.
        encoding = tiktoken.encoding_for_model(openai_model)
        print(
            "\033[37m" + str(len(encoding.encode(prompt))) + " tokens\033[0m"
            + " in prompt: " + "\033[92m" + prompt[:50] + "\033[0m"
        )

    openai.api_key = os.environ["OPENAI_API_KEY"]

    messages = []
    messages.append({"role": "system", "content": system_prompt})
    reportTokens(system_prompt)
    messages.append({"role": "user", "content": user_prompt})
    reportTokens(user_prompt)
    # Alternate roles for any extra conversation turns passed in.
    role = "assistant"
    for value in args:
        messages.append({"role": role, "content": value})
        reportTokens(value)
        role = "user" if role == "assistant" else "assistant"

    params = {
        "model": openai_model,
        "messages": messages,
        "max_tokens": openai_model_max_tokens,
        "temperature": 0,
    }

    response = openai.ChatCompletion.create(**params)
    time.sleep(1)  # pause 1 second between API calls
    reply = response.choices[0]["message"]["content"]
    return reply
```
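On the "i wonder how to tweak this properly" comment: `max_tokens` caps only the completion, and prompt tokens plus completion must fit inside the model's context window (4097 for the model in the error above). A rough budget calculation, with hypothetical numbers, looks like:

```python
def completion_budget(context_window, prompt_tokens):
    # The largest max_tokens value that still fits: whatever room the
    # prompt leaves in the context window (never negative).
    return max(context_window - prompt_tokens, 0)

print(completion_budget(4097, 1500))  # → 2597
```

So a fixed `max_tokens = 2000` only works while the prompt stays under roughly 2097 tokens for a 4097-token model.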

Now I'm running into a different error, which I think has already been logged, so I'll close this out.
For reference: from what I can tell it was a rate-limiting issue, and the retry/backoff code above is what fixed it.
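The fix above leans on `modal.Retries` for exponential backoff. The same idea can be sketched in plain Python, assuming a hypothetical `RateLimitError` stand-in rather than the real OpenAI exception types:

```python
import time

class RateLimitError(Exception):
    """Stand-in for an API rate-limit error."""

def with_backoff(fn, max_retries=3, initial_delay=1.0, coefficient=2.0):
    # Retry fn() on RateLimitError, doubling the wait each attempt,
    # mirroring the modal.Retries settings used above.
    delay = initial_delay
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise  # out of retries; surface the error
            time.sleep(delay)
            delay *= coefficient
```

With `max_retries=3` and `coefficient=2.0` the waits are 1 s, 2 s, 4 s before the final attempt, which is usually enough to ride out the `x-ratelimit-reset-*` windows shown in the response headers.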

from developer.
