smol-ai / developer

the first library to let you embed a developer agent in your own app!

Home Page: https://twitter.com/SmolModels

License: MIT License

Python 81.94% Makefile 0.59% JavaScript 12.09% HTML 2.22% CSS 3.16%

developer's People

Contributors

aksh-at, billysweird, cocktailpeanut, cuio, danmenzies, david-ademola, eltociear, fardeem, jakubno, jesse-michael-han, jonnyhoff, kilian, meirm, mlejva, searls, smit-parmar, swyxio, talboren, thatliuser


developer's Issues

Not working without Modal

I tried the instructions without Modal:

python main_no_modal.py "A basic React app that just displays a page that says 'Hello, World!'"

Result:

Traceback (most recent call last):
  File "/Users/dzso/Dev/smol-dev/main_no_modal.py", line 245, in <module>
    main(prompt, directory, file)
NameError: name 'file' is not defined. Did you mean: 'filter'?

Seems like it's expecting another arg for file, but I'm not sure what to give it. I tried giving it one more argument, test.js, but I get the same error.

TLS CA certification

Hi,

I keep getting an OSError. I have changed the location of the TLS CA certificate, created new ones, moved it again, and created a new PATH environment variable, but I keep getting the same message. Where am I going wrong?

File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 458, in send
self.cert_verify(conn, request.url, verify, cert)
File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 261, in cert_verify
raise OSError(
OSError: Could not find a suitable TLS CA certificate bundle, invalid path: /root/certifi/cacert.pem

I had difficulty getting Modal up and running, but it worked after I moved the entire 'site-packages' from the Roaming folder to the Local folder.

Please help me.

'modal' is not recognized as the name of a cmdlet

unable to run: modal token new

Error:
modal : The term 'modal' is not recognized as the name of a cmdlet, function, script file,
or operable program. Check the spelling of the name, or if a path was included, verify that
the path is correct and try again.

testing and developing code2prompt.py

The first thing I wanted to try with code2prompt was to see if it could prompt itself into existence by looking at itself.

All I wanted to do was add a better README file with an indexed table of contents, so that I can understand how to use smol dev better.

So I put the whole smol dev folder into the generated folder and ran code2prompt.

I got a token limit error.

I think if you fix this, you could just keep feeding it issues from here and have it develop and improve itself.

Let's level up the smol developer.

Defining Requirements

I really struggled with getting my first prompt to feed to smol-dev, partly because my first prompt was way, way too big and included too much information.

Let's get smol-dev to ask for the requirements it needs to get started. I used this:

System prompt: You are a prompt generation robot. You need to gather information about the user's goals, objectives, examples of preferred output, and other relevant context. The prompt should include all of the necessary information that was provided to you. Ask follow-up questions of the user until you are confident you can produce the perfect prompt. Your output should be formatted clearly and optimized for GPT interactions. Start by asking for the user's goals, desired output, and any additional information you may need.

User prompt: I want to create a python application using FastAPI that mimics an existing Ruby on Rails application we have. The existing rails app makes use of ActiveRecord for many of the API endpoints for things like updating the records via "edit". The application is a simple "cart" of "products", each "product" is identified by an integer. The database has an "account" for each user, and each "account" can have multiple "products" associated with it. The user may login and logout as well as resetting their password.

It then asked me a series of questions, which I would answer using the "assistant" and "user" additional parameters to keep the dialog flowing, until it generated a final prompt.

I then fed this prompt to the first stage of smol-dev.
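
A minimal sketch of that dialog loop, using the old-style openai SDK seen elsewhere in these issues (the model name and the empty-input stopping rule are placeholders of mine, not part of smol-dev):

import openai

messages = [
    # the prompt-generation robot system prompt quoted above, abbreviated
    {"role": "system", "content": "You are a prompt generation robot. ..."},
    {"role": "user", "content": "I want to create a python application using FastAPI ..."},
]

while True:
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    content = reply.choices[0].message.content
    print(content)
    messages.append({"role": "assistant", "content": content})
    answer = input("> ")  # answer its follow-up questions
    if not answer:        # stop once it has produced the final prompt
        break
    messages.append({"role": "user", "content": answer})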

Adding Additional Context

In my case, I had additional context that I wanted to give GPT, but including it all would cause the prompt to blow up. So I would "fake" additional dialog with GPT to give it that context. I also tried telling smol-dev to ask for more details if it needed them to generate the code.

For example, in file generation I might use:

System Prompt:

    You are a Python FastAPI code generation AI robot.  You specialize in writing one single file of the requested application.  You generate a complete, easily readable and maintainable source code file including type annotations and doc strings which will achieve the user's goals.
    Ask follow up questions to the user until you have confidence you can produce the perfect file list.  Start by asking the user for additional information you may need to generate the file list.  Once you are confident you understand the problem, begin generating the code for the file.

    Only write valid code for the given filepath and file type, and return only the code.
    Do not add any other explanation, only return valid code for that file type.

    Remember that you must obey 3 things:
       - you are generating code for the file "controllers.py"
       - do not stray from the names of the files and the shared dependencies we have decided on
       - MOST IMPORTANT OF ALL - the purpose of our app is specified by the user prompt - every line of code you generate must be valid code.

    The files we have decided to generate are:
    - [the list of files]
  
    And the shared dependencies are:
    -  [the list of dependencies]

User Prompt: [The prompt generated above]

Assistant response (faked): Can you tell me more about the data model for the listings and carts? For example, what are the attributes of a "listing", and how are listings associated with carts and accounts? How are they stored in the database?

User prompt: The database model looks like this: [Schema]

Assistant response (faked): To clarify, do you want the Python FastAPI application to have the same functionality as the existing Ruby on Rails app, including the use of ActiveRecord for the API endpoints? Are there any specific endpoints or features that are especially important to replicate?

User prompt: Here is a list of the endpoints I want to provide: [API endpoints]

It would then go on to generate code.
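
In API terms, the faked dialog is nothing more than hand-written assistant messages in the messages list. A sketch, where SYSTEM_PROMPT, GENERATED_PROMPT, SCHEMA, and ENDPOINTS are placeholders for the pieces above:

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": GENERATED_PROMPT},
    # faked assistant turn -- never actually produced by the model
    {"role": "assistant", "content": "Can you tell me more about the data model for the listings and carts?"},
    {"role": "user", "content": f"The database model looks like this: {SCHEMA}"},
    # another faked turn
    {"role": "assistant", "content": "Are there any specific endpoints or features to replicate?"},
    {"role": "user", "content": f"Here is a list of the endpoints I want to provide: {ENDPOINTS}"},
]
code = openai.ChatCompletion.create(model="gpt-4", messages=messages).choices[0].message.content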

Thoughts?

Prompt to Use ChatGPT Plus instead

Not having access to GPT4 via API, and also trying to limit cost, I'm trying to use ChatGPT Plus to run this.
https://github.com/acheong08/ChatGPT-to-API offers a simple solution for this, and works great.
But the output files contain extraneous "chat-ey" text. For example:

"Here's the code for the file pi_digits.py that returns the first 100 digits of pi, backwards:

import math

def get_pi_digits():
    pi = str(math.pi)
    pi_digits = pi[:1:-1]
    return pi_digits[:100]

print(get_pi_digits())

This code imports the math module, defines a function get_pi_digits() that calculates the value of pi and retrieves the first 100 digits in reverse order. Finally, it prints the result using the print() function.
"

So close to code! I'm trying to find a modified prompt or system prompt that gets ChatGPT to behave like the OpenAI API.

(Off-topic: Yes, I know its code output won't spit out 100 digits. It's my favorite LLM code challenge. GPT4 will code it right, and I'm hoping debug.py will get it there.)
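
An alternative to prompt tweaks is post-processing: strip the conversational wrapper before writing the file. A rough sketch (my own heuristic, not part of smol developer):

import re

def extract_code(response: str) -> str:
    # prefer the first fenced code block; otherwise return the reply unchanged
    match = re.search(r"```(?:\w+)?\n(.*?)```", response, re.DOTALL)
    return match.group(1) if match else response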

error when writing inside folders

I'm getting this all the time, with and without Modal:

IsADirectoryError: [Errno 21] Is a directory: 'generated/src/'

It seems that smol is trying to write directly to the folder instead of to the appropriate file.
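
One way to guard against this (a sketch, not the repo's code) is to skip bare directory paths and create parent folders before writing:

import os

def write_file(path: str, content: str) -> None:
    if path.endswith("/") or os.path.isdir(path):
        return  # a directory was listed, not a file -- nothing to write
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    with open(path, "w") as f:
        f.write(content)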

Generalise the directory cleaning rules

Read the note that there was hardcoded logic not to delete images.

Maybe this could work a bit like a .gitignore file, where you can put globs for files to ignore when cleaning:

smol-ignore

**/*.jpg
**/*.png
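
A sketch of what the repo's clean_dir could do with such a file (note that fnmatch treats ** like *, so this is only an approximation of gitignore semantics):

import os
from fnmatch import fnmatch

def load_ignore_globs(path="smol-ignore"):
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [line.strip() for line in f if line.strip() and not line.startswith("#")]

def clean_dir(directory, globs):
    # delete everything under directory except files matching an ignore glob
    for root, _, files in os.walk(directory):
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), directory)
            if not any(fnmatch(rel, g) for g in globs):
                os.remove(os.path.join(root, name))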

code2prompt wastes tokens on __pycache__ and other junk files when executing walk_directory

I don't know how to do a PR, but I saw that it was trying to parse junk files in walk_directory, so I added the following to skip __pycache__; maybe skipping other folders is a good idea too.

def walk_directory(directory):
    code_contents = {}
    for root, dirs, files in os.walk(directory):
        if '__pycache__' in dirs:
            dirs.remove('__pycache__')  # don't visit __pycache__ directories
        for file in files:
            if not any(file.endswith(ext) for ext in EXTENSION_TO_SKIP):
                # compute the key outside the try so it is bound in the except clause
                relative_filepath = os.path.relpath(os.path.join(root, file), directory)
                try:
                    code_contents[relative_filepath] = read_file(os.path.join(root, file))
                except Exception as e:
                    code_contents[relative_filepath] = f"Error reading file {file}: {str(e)}"
    return code_contents

I get an error any time I use modal (I have access)

PS F:\WPPTESTS\developer-main> modal run main.py --prompt app dashboard
Usage: modal run main.py [OPTIONS]
Try 'modal run main.py --help' for help.
╭─ Error ───────────────────────────────────╮
│ Got unexpected extra argument (dashboard) │
╰───────────────────────────────────────────╯
PS F:\WPPTESTS\developer-main> modal run main.py --prompt "app dashboard"
✓ Initialized. View app at
https://modal.com/apps/ap-rszKOu9o7GTMUrv34n8yYs
✓ Created objects.
├── 🔨 Created generate_response.
├── 🔨 Created mount F:\WPPTESTS\developer-main\main.py
└── 🔨 Created generate_file.
hi its me, 🐣the smol developer🐣! you said you wanted:
app dashboard
89 tokens in prompt: You are an AI developer who is trying to write a p
2 tokens in prompt: app dashboard
Traceback (most recent call last):
  File "/pkg/modal/_container_entrypoint.py", line 330, in handle_input_exception
    yield
  File "/pkg/modal/_container_entrypoint.py", line 403, in call_function_sync
    res = fun(*args, **kwargs)
  File "/root/main.py", line 52, in generate_response
    response = openai.ChatCompletion.create(**params)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.AuthenticationError:
(the same output repeats three more times)
Error in sys.excepthook:
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1699, in print
    extend(render(renderable, render_options))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1335, in render
    yield from self.render(render_output, _options)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1331, in render
    for render_output in iter_render:
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\constrain.py", line 29, in __rich_console__
    yield from console.render(self.renderable, child_options)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1331, in render
    for render_output in iter_render:
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\panel.py", line 220, in __rich_console__
    lines = console.render_lines(renderable, child_options, style=style)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1371, in render_lines
    lines = list(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\segment.py", line 292, in split_and_crop_lines
    for segment in segments:
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1331, in render
    for render_output in iter_render:
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\padding.py", line 97, in __rich_console__
    lines = console.render_lines(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1371, in render_lines
    lines = list(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\segment.py", line 292, in split_and_crop_lines
    for segment in segments:
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1335, in render
    yield from self.render(render_output, _options)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\console.py", line 1331, in render
    for render_output in iter_render:
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\syntax.py", line 611, in __rich_console__
    segments = Segments(self._get_syntax(console, options))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\segment.py", line 668, in __init__
    self.segments = list(segments)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\syntax.py", line 639, in _get_syntax
    text = self.highlight(processed_code, self.line_range)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\syntax.py", line 512, in highlight
    text.append_tokens(tokens_to_spans())
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\text.py", line 991, in append_tokens
    for content, style in tokens:
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\rich\syntax.py", line 498, in tokens_to_spans
    _token_type, token = next(tokens)
KeyboardInterrupt

Original exception was:
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\Scripts\modal.exe\__main__.py", line 7, in <module>
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\modal\__main__.py", line 6, in main
    entrypoint_cli()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\typer\core.py", line 778, in main
    return _main(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\typer\core.py", line 216, in _main
    rv = self.invoke(ctx)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\modal\cli\run.py", line 116, in f
    func(*args, **kwargs)
  File "F:\WPPTESTS\developer-main\main.py", line 125, in main
    filepaths_string = generate_response.call(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\synchronicity\synchronizer.py", line 497, in proxy_method
    return wrapped_method(instance, *args, **kwargs)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\synchronicity\combined_types.py", line 26, in __call__
    raise uc_exc.exc from None
  File ":/root/main.py", line 52, in generate_response
  File ":/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
  File ":/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
  File ":/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
  File ":/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
  File ":/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
openai.error.AuthenticationError:
PS F:\WPPTESTS\developer-main> modal run main.py --prompt prompt2.md

error while installing "utils" module

Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\smol\smol\Scripts\modal.exe\__main__.py", line 7, in <module>
  File "D:\smol\smol\lib\site-packages\modal\__main__.py", line 6, in main
    entrypoint_cli()
  File "D:\smol\smol\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "D:\smol\smol\lib\site-packages\typer\core.py", line 778, in main
    return _main(
  File "D:\smol\smol\lib\site-packages\typer\core.py", line 216, in _main
    rv = self.invoke(ctx)
  File "D:\smol\smol\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "D:\smol\smol\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "D:\smol\smol\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "D:\smol\smol\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "D:\smol\smol\lib\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "D:\smol\smol\lib\site-packages\modal\cli\run.py", line 116, in f
    func(*args, **kwargs)
  File "D:\smol\smol\developer\main.py", line 115, in main
  :883 in exec_module
  :241 in _call_with_frames_removed
  File "/root/main.py", line 4, in <module>
    from utils import clean_dir
ModuleNotFoundError: No module named 'utils'

rate limits

I think the repo seems to be making too many API calls to OpenAI in quick succession. Any plans to rate limit this?

openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.
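
A simple mitigation until the repo grows one (a sketch of retry-with-exponential-backoff, not existing smol-dev code):

import time
import openai

def create_with_backoff(**params):
    delay = 1.0
    for attempt in range(6):
        try:
            return openai.ChatCompletion.create(**params)
        except openai.error.RateLimitError:
            if attempt == 5:
                raise
            time.sleep(delay)  # back off 1s, 2s, 4s, ...
            delay *= 2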

Cannot find module named 'modal'

I tried to research this, but I am unable to find this python module or missing file. Is this missing from the repo or in another repo, command, or package?

error in code?

after editing main.py to use gpt-3.5, I ran the prompt:
modal run main.py --prompt "a Chrome extension that, when clicked, opens a small window with a page where you can enter a prompt for reading the currently open page and generating some response from openai"

It came back with AttributeError: 'tuple' object has no attribute 'startswith'

from the lines:
...Remote call to Modal Function (ta-VdjFotWssx4N39TvWiMX1a)...

/root/main.py:40 in generate_response
❱ 40 reportTokens(system_prompt)

/root/main.py:30 in reportTokens
❱ 30 encoding = tiktoken.encoding_for_model(openai_model)

/usr/local/lib/python3.10/site-packages/tiktoken/model.py:66 in encoding_for_model
❱ 66 if model_name.startswith(model_prefix):

No access to claude api key, can't use

What do you suggest for those of us without a Claude API key who want to use this? I applied for one, but it doesn't seem like it will be granted anytime soon.

Support full token limits

Nice project. I saw a note here:

openai_model_max_tokens = 2000 # i wonder how to tweak this properly

You can get the total tokens for a request, then subtract it from the max tokens the model allows. Here's the cookbook implementation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb

Without so many models to support you can cut it down, here's an example I grabbed from a codebase I've got for generating full applications from prompts too:

import tiktoken

def num_tokens_from_messages(messages):
    """Returns the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model("gpt-4")
    except KeyError:
        encoding = tiktoken.get_encoding("cl100k_base")
    num_tokens = 0
    for message in messages:
        num_tokens += (
            4  # every message follows <im_start>{role/name}\n{content}<im_end>\n
        )
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":  # if there's a name, the role is omitted
                num_tokens += -1  # role is always required and always 1 token
    num_tokens += 2  # every reply is primed with <im_start>assistant
    return num_tokens
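
The subtraction suggested above would then be (8192 is an assumed context size for gpt-4):

MODEL_TOKEN_LIMIT = 8192

def max_response_tokens(messages):
    # whatever the prompt doesn't use is available for the completion
    return MODEL_TOKEN_LIMIT - num_tokens_from_messages(messages)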

How to incorporate data from a vector db?

I want to generate code based on text files and git repos so that the script knows to build its responses for code generation using the included index/embeddings for reference. How could I add support for this in this project?
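
One possible shape for this (a sketch using the old-style openai SDK; nothing like it exists in the project yet): embed the reference files once, retrieve the closest ones per prompt, and prepend them as context.

import numpy as np
import openai

def embed(text):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])

def top_k(query, docs, doc_vecs, k=3):
    # cosine similarity between the query and every document vector
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-sims)[:k]]

# context = "\n\n".join(top_k(prompt, docs, doc_vecs)) could then be
# prepended to the smol-dev prompt before code generation.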

Invalid Character error return both by main.py and main_no_modal.py

Hi,
I'm trying this prompt:

"write a python project for a tic-tac-toe game with a gui. remember also to include requirements.txt and a shell file that launches the game.
There should be 3 levels:
1) dumb (randomly marks)
2) intelligent (does it's strategy)
3) Cheater"

This is the output for main_no_modal.py (but the same error occurs using main.py)

...
Here are the filepaths for the tic-tac-toe game with a GUI:

  • tic_tac_toe.py: This file will contain the main logic for the game.
  • gui.py: This file will contain the code for the graphical user interface.
  • requirements.txt: This file will list all the dependencies required to run the program.
  • launch.sh: This shell script will launch the game.

Here is the complete list of filepaths:

tic_tac_toe/
├── tic_tac_toe.py
├── gui.py
├── requirements.txt
└── launch.sh

Note: The tic_tac_toe/ directory is optional and is used to group all the files related to the game together.
Traceback (most recent call last):
  File "/home/rikbon/developer/main_no_modal.py", line 241, in <module>
    main(prompt, directory, file)
  File "/home/rikbon/developer/main_no_modal.py", line 138, in main
    list_actual = ast.literal_eval(filepaths_string)
  File "/usr/lib/python3.10/ast.py", line 62, in literal_eval
    node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
  File "/usr/lib/python3.10/ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 12
    ├── tic_tac_toe.py
    ^
SyntaxError: invalid character '├' (U+251C)
...

Any idea?
Thanks in advance.

Use variables in prompts

Just started to look at this project today and very excited about it as I am working on multiple AI projects myself.

Just a suggestion; not sure where to put it, as I am quite new to using GitHub:

In the prompt you can actually use variables, and GPT understands how to use them.

generate_file uses prompt and filename multiple times.

You can start the prompt with:

PROMPT="....."
FILENAME="...."

and then just use PROMPT and FILENAME within the prompt text.
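
A sketch of what that could look like in generate_file's prompt (prompt and filename are the values the repo already passes in; the exact wording is mine):

system_prompt = f"""PROMPT="{prompt}"
FILENAME="{filename}"

You are generating code for the file FILENAME as part of the app described by PROMPT.
Only output valid code for FILENAME.
"""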

`filepaths_string` does not consistently generate correct Python code

Given this prompt:
An example web page using React and HTML. It should have a text box for the user to enter some text with a button to submit it, and on submission another field should be updated to show the text that was put in.
I don't get a parseable Python list when using gpt-3.5-turbo (screenshot omitted).
By using the same type of refining statements given in generate_file, I was able to produce a proper response (screenshot omitted).

Rate limit requests?

SUPER FUN PROJECT!

Ran it using the following prompt:

a flask app that serves a react frontend powered by create-react-app 
and also includes authentication. The react app should have a login
form and a page that is only accessible when you login in.

Errors out with

Rate limit reached for default-gpt-4 in organization org-<> on tokens 
per min. Limit: 40000 / min. Please try again in 1ms. Contact us through 
our help center at help.openai.com if you continue to have issues.

How can we rate limit it? I'm not entirely sure how Modal fits into this and parallelizes it.
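
Modal appears to fan the per-file generation calls out in parallel, which is what trips the per-minute token limit. If I remember Modal's API correctly (treat the parameter name as an assumption), you can cap the parallelism on the function decorator:

import modal

stub = modal.Stub("smol-developer")

@stub.function(concurrency_limit=2)  # at most 2 containers calling OpenAI at once
def generate_file(filename, prompt):
    ...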

invalid syntax error "I understand. Here is a list of filepaths that you would need to create the program: "

The script seems to get hung up on an invalid syntax error when providing detailed requirements in the prompt. I have also noticed that using quotation marks and commas in prompt.md seems to throw the script off; only when keeping it simple, without quotation marks or commas, does the prompt execute.

The error:

Traceback (most recent call last):
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\Scripts\modal.exe\__main__.py", line 7, in <module>
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\modal\__main__.py", line 6, in main
    entrypoint_cli()
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\typer\core.py", line 778, in main
    return _main(
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\typer\core.py", line 216, in _main
    rv = self.invoke(ctx)
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\site-packages\modal\cli\run.py", line 116, in f
    func(*args, **kwargs)
  File "C:\Users\Node_01\Documents\copilot coding\developer\main_2.py", line 133, in main
    list_actual = ast.literal_eval(filepaths_string)
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\ast.py", line 64, in literal_eval
    node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
  File "C:\Users\Node_01\AppData\Local\Programs\Python\Python310\lib\ast.py", line 50, in parse
    return compile(source, filename, mode, flags,

I understand. Here is a list of filepaths that you would need to create the program:
▲
SyntaxError: invalid syntax

Characters like `+` throws parsing error

Running python main_no_modal.py with a prompt including characters like ; and : would throw the following error; it seems to be related to the parsing step (.lstrip(" \t")).

Edit: after some experimentation, it seems the culprit is actually the + character.

Traceback (most recent call last):
  File "~/Desktop/developer/main_no_modal.py", line 230, in <module>
    main(prompt, directory, file)
  File "~/Desktop/developer/main_no_modal.py", line 135, in main
    list_actual = ast.literal_eval(filepaths_string)
  File "/usr/lib/python3.10/ast.py", line 62, in literal_eval
    node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
  File "/usr/lib/python3.10/ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 1
    - /src/index.js
      ^
SyntaxError: invalid syntax

Just for reproducibility:

  1. install the dependencies
  2. export the OpenAI API Key
  3. run python main_no_modal.py "a react app that is luxurious and dark-themed, offering a simple form to RSVP to the wedding; Form has following fields: first name, last name, checkbox indicating a +1" -> this will not work and will throw the earlier error
  4. run python main_no_modal.py "a react app that is luxurious and dark-themed, offering a simple form to RSVP to the wedding." -> this will work.

Rate limit reached

i am receiving this error when trying to run with modal:

openai.error.RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-o8pnXYMZiNBFvGP0dQH0exyg on requests per min. Limit: 3 / min. Please try again in 20s.

filepaths error

For more complex file structures the app runs into errors because it cannot read the filepaths from the filepath string.
In order to avoid a filepaths error, I would suggest modifying the filepaths prompt as below. It worked much better for me.

# call openai api with this prompt
filepaths_string = generate_response(
    """You are an AI developer who is trying to write a program that will generate code for the user based on their intent.

When given their intent, create a complete, exhaustive list of files including their paths that the user would write to make the program.

Prepare your response in the format below:

"['file1.txt', 'file2.txt', 'folder1/file3.txt', 'folder1/file4.txt', 'folder1/folder2/file5.txt', 'folder1/folder2/file6.txt']"

Do not add any further explanation, for automatic processing.
""",
    prompt,
)

Issue .git / .idx File missing, Permission Denied

Hi, I fresh-cloned the repo and followed the instructions, but now I am stuck getting it to run.

I start it up as normal: modal run main.py --prompt "

I can see the smol dev doing its thing, using tokens: 89 tokens in prompt: .. and so on:

[
"frontend/next.config.js",
"frontend/pages/index.js",
"frontend/pages/_app.js",
"frontend/pages/_document.js",
"frontend/pages/blog/[slug].js",
"frontend/components/Header.js",
"frontend/components/Footer.js",
"frontend/components/PostCard.js",
"frontend/components/PostContent.js",
"frontend/lib/api.js",
"frontend/styles/global.css",
"backend/wordpress/wp-config.php",
"backend/wordpress/wp-content/themes/custom-theme/functions.php",
"backend/wordpress/wp-content/plugins/custom-plugin/custom-plugin.php"
]

but then it crashes:

(skipped traceback)

PermissionError: [WinError 5] Access denied: '<censored>\.git\\objects\\pack\\pack-<id-removed>.idx'

Investigating the folder, there is no .idx file with that id, but there is one with a different id. Is it looking for the wrong id, or does anyone have a better idea? If you need it, I will attach the traceback.

Please keep developing this!

Idea for development: Organise the about.txt and user manual more neatly. Maybe use GPT4 to write it using a good template?

SyntaxError: invalid decimal literal

I keep running into this error. Any Idea what this means ?

Traceback (most recent call last):
  File "C:\Users\User\Desktop\developer\main_no_modal.py", line 244, in <module>
    main(prompt, directory, file)
  File "C:\Users\User\Desktop\developer\main_no_modal.py", line 141, in main
    list_actual = ast.literal_eval(filepaths_string)
  File "C:\Python311\Lib\ast.py", line 64, in literal_eval
    node_or_string = parse(node_or_string.lstrip(" \t"), mode='eval')
  File "C:\Python311\Lib\ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 18
    - /database/migrations/2021_01_01_create_users_table.php
    ^
SyntaxError: invalid decimal literal

Incompatibility Issue: ast.literal_eval() Fails to Parse the generated Filepath String

The script main_no_modal.py encounters a SyntaxError when attempting to parse the string returned by generate_response using Python's ast.literal_eval() function.

The AI's generate_response function currently returns a string representing a list of files in the following format:

- manifest.json
- popup.html
- popup.js
- background.js
- style.css
- words.json (or any other file containing the sentences for typing)

Unfortunately, ast.literal_eval() expects a string formatted as a valid Python expression, which the current output is not. The incompatibility leads to a SyntaxError during the execution of ast.literal_eval(filepaths_string). The error message is as follows:

File "C:\Users\parij\AppData\Local\Programs\Python\Python39\lib\ast.py", line 62, in literal_eval
    node_or_string = parse(node_or_string, mode='eval')
  File "C:\Users\parij\AppData\Local\Programs\Python\Python39\lib\ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 2
    - popup.html
    ^
SyntaxError: invalid syntax

The issue can potentially be resolved by modifying the output of generate_response to be a valid Python list string or by transforming the AI's output into a format that ast.literal_eval() can process correctly before calling the function.
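
A sketch of that transformation (my own fallback, not the repo's current code): try the happy path first, then scrape one path per bullet line.

import ast
import re

def parse_filepaths(filepaths_string):
    try:
        return ast.literal_eval(filepaths_string)
    except (SyntaxError, ValueError):
        # fall back to bullet-list output like "- popup.html"
        return re.findall(r"^\s*-\s*(\S+)", filepaths_string, re.MULTILINE)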

Steps to reproduce:

  1. Run the following prompt:
     python main_no_modal.py "a Manifest V3 Chrome extension that offers a pop-up, distraction-free environment for typing practice. It should generate varied, random sentences for typing and start a one-minute timer as the user begins. After the timer expires, display the user's typing speed in words per minute and add a reset button for a new round. Make sure the size of the pop-up is minimum 500 height * 600 width"
  2. Note the SyntaxError that is thrown when the ast.literal_eval() function is called with the output of the generate_response function.

Any additional information or suggestions to resolve this issue are appreciated.

Syntax error with main.py

I'm assuming I've done something wrong here, as it was working 'fine' (it seemed to generate half-complete files and reference non-existent libraries, but it was still something to work with).
I updated my prompt, cleared the generated directory, hit the go button and....

โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
โ”‚ /Users/kalebufton/Documents/Code Ethical/self-ai/aidev/bin/modal:10 in <module>                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚    9 โ”‚   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])                        โ”‚
โ”‚ โฑ 10 โ”‚   sys.exit(main())                                                                        โ”‚
โ”‚   11                                                                                             โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/modal/__main__.py:6 in main                    โ”‚
โ”‚                                                                                                  โ”‚
โ”‚    5 def main():                                                                                 โ”‚
โ”‚ โฑ  6 โ”‚   entrypoint_cli()                                                                        โ”‚
โ”‚    7                                                                                             โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/click/core.py:1130 in __call__                 โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   1129 โ”‚   โ”‚   """Alias for :meth:`main`."""                                                     โ”‚
โ”‚ โฑ 1130 โ”‚   โ”‚   return self.main(*args, **kwargs)                                                 โ”‚
โ”‚   1131                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/typer/core.py:778 in main                      โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   777 โ”‚   ) -> Any:                                                                              โ”‚
โ”‚ โฑ 778 โ”‚   โ”‚   return _main(                                                                      โ”‚
โ”‚   779 โ”‚   โ”‚   โ”‚   self,                                                                          โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/typer/core.py:216 in _main                     โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   215 โ”‚   โ”‚   โ”‚   with self.make_context(prog_name, args, **extra) as ctx:                       โ”‚
โ”‚ โฑ 216 โ”‚   โ”‚   โ”‚   โ”‚   rv = self.invoke(ctx)                                                      โ”‚
โ”‚   217 โ”‚   โ”‚   โ”‚   โ”‚   if not standalone_mode:                                                    โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/click/core.py:1657 in invoke                   โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   1656 โ”‚   โ”‚   โ”‚   โ”‚   with sub_ctx:                                                             โ”‚
โ”‚ โฑ 1657 โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   return _process_result(sub_ctx.command.invoke(sub_ctx))               โ”‚
โ”‚   1658                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/click/core.py:1657 in invoke                   โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   1656 โ”‚   โ”‚   โ”‚   โ”‚   with sub_ctx:                                                             โ”‚
โ”‚ โฑ 1657 โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   return _process_result(sub_ctx.command.invoke(sub_ctx))               โ”‚
โ”‚   1658                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/click/core.py:1404 in invoke                   โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   1403 โ”‚   โ”‚   if self.callback is not None:                                                     โ”‚
โ”‚ โฑ 1404 โ”‚   โ”‚   โ”‚   return ctx.invoke(self.callback, **ctx.params)                                โ”‚
โ”‚   1405                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/click/core.py:760 in invoke                    โ”‚
โ”‚                                                                                                  โ”‚
โ”‚    759 โ”‚   โ”‚   โ”‚   with ctx:                                                                     โ”‚
โ”‚ โฑ  760 โ”‚   โ”‚   โ”‚   โ”‚   return __callback(*args, **kwargs)                                        โ”‚
โ”‚    761                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/click/decorators.py:26 in new_func             โ”‚
โ”‚                                                                                                  โ”‚
โ”‚    25 โ”‚   def new_func(*args, **kwargs):  # type: ignore                                         โ”‚
โ”‚ โฑ  26 โ”‚   โ”‚   return f(get_current_context(), *args, **kwargs)                                   โ”‚
โ”‚    27                                                                                            โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code                                                                 โ”‚
โ”‚ Ethical/self-ai/aidev/lib/python3.9/site-packages/modal/cli/run.py:116 in f                      โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   115 โ”‚   โ”‚   โ”‚   else:                                                                          โ”‚
โ”‚ โฑ 116 โ”‚   โ”‚   โ”‚   โ”‚   func(*args, **kwargs)                                                      โ”‚
โ”‚   117 โ”‚   โ”‚   โ”‚   if app.function_invocations == 0:                                              โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/Documents/Code Ethical/self-ai/developer/main.py:129 in main                   โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   128 โ”‚   try:                                                                                   โ”‚
โ”‚ โฑ 129 โ”‚   โ”‚   list_actual = ast.literal_eval(filepaths_string)                                   โ”‚
โ”‚   130                                                                                            โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/opt/anaconda3/lib/python3.9/ast.py:62 in literal_eval                          โ”‚
โ”‚                                                                                                  โ”‚
โ”‚     61 โ”‚   if isinstance(node_or_string, str):                                                   โ”‚
โ”‚ โฑ   62 โ”‚   โ”‚   node_or_string = parse(node_or_string, mode='eval')                               โ”‚
โ”‚     63 โ”‚   if isinstance(node_or_string, Expression):                                            โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /Users/kalebufton/opt/anaconda3/lib/python3.9/ast.py:50 in parse                                 โ”‚
โ”‚                                                                                                  โ”‚
โ”‚     49 โ”‚   # Else it should be an int giving the minor version for 3.x.                          โ”‚
โ”‚ โฑ   50 โ”‚   return compile(source, filename, mode, flags,                                         โ”‚
โ”‚     51 โ”‚   โ”‚   โ”‚   โ”‚      _feature_version=feature_version)                                      โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
โ”‚ - /main.py                                                                                       โ”‚
โ”‚   โ–ฒ                                                                                              โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
SyntaxError: invalid syntax
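
For what it's worth, the crash is in main.py's `ast.literal_eval(filepaths_string)` call: when the model wraps the file list in prose (or returns anything that isn't a valid Python list literal), `compile` raises the SyntaxError above. A defensive-parsing sketch (not the repo's actual code; `parse_filepaths` is a hypothetical helper):

```python
# Minimal sketch: pull the first [...] literal out of the model's reply before
# handing it to ast.literal_eval, so stray prose around the list no longer
# raises SyntaxError. parse_filepaths is a hypothetical helper, not main.py's.
import ast
import re

def parse_filepaths(filepaths_string: str) -> list:
    match = re.search(r"\[.*\]", filepaths_string, re.DOTALL)
    if match is None:
        raise ValueError(f"no list literal found in model output: {filepaths_string!r}")
    return ast.literal_eval(match.group(0))
```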

Running the example prompt in prompt.md generates a non-working Chrome extension - Error: Could not load icon 'icon16.png' specified in 'icons'. Could not load manifest.

Tested several times, always with the same results.
I get no errors in the console.
Here is what I run -
modal run main.py --prompt prompt.md --model=gpt-4

Here is the result -
`โœ“ Initialized. View app at https://modal.com/apps/ap-2X0ikxpFtqmLoE63zZSiah
โœ“ Created objects.
โ”œโ”€โ”€ ๐Ÿ”จ Created generate_response.
โ”œโ”€โ”€ ๐Ÿ”จ Created mount /mnt/nas/developer/main.py
โ”œโ”€โ”€ ๐Ÿ”จ Created mount /mnt/nas/developer/utils.py
โ”œโ”€โ”€ ๐Ÿ”จ Created mount /mnt/nas/developer/constants.py
โ””โ”€โ”€ ๐Ÿ”จ Created generate_file.
hi its me, ๐Ÿฃthe smol developer๐Ÿฃ! you said you wanted:
a Chrome Manifest V3 extension that reads the current page, and offers a popup UI that has the page title+content and a textarea for a prompt (with a default value we
specify). When the user hits submit, it sends the page title+content to the Anthropic Claude API along with the up to date prompt to summarize it. The user can modify
that prompt and re-send the prompt+content to get another summary view of the content.

  • Only when clicked:
    • it injects a content script content_script.js on the currently open tab, and accesses the title pageTitle and main content (innerText) pageContent of the
      currently open page
      (extracted via an injected content script, and sent over using a storePageContent action)
    • in the background, receives the storePageContent data and stores it
    • only once the new page content is stored, then it pops up a full height window with a minimalistic styled html popup
    • in the popup script
      • the popup should display a 10px tall rounded css animated red and white candy stripe loading indicator loadingIndicator, while waiting for the anthropic api to
        return
        • with the currently fetching page title and a running timer in the center showing time elapsed since call started
        • do not show it until the api call begins, and hide it when it ends.
      • retrieves the page content data using a getPageContent action (and the background listens for the getPageContent action and retrieves that data) and displays
        the title at the top of the popup
      • check extension storage for an apiKey, and if it isn't stored, asks for an API key to Anthropic Claude and stores it.
      • at the bottom of the popup, show a vertically resizable form that has:
        • a 2 line textarea with an id and label of userPrompt
          • userPrompt has a default value of
            defaultPrompt = `Please provide a detailed, easy to read HTML summary of the given content`;
        • a 4 line textarea with an id and label of stylePrompt
          • stylePrompt has a default value of
            defaultStyle = `Respond with 3-4 highlights per section with important keywords, people, numbers, and facts bolded in this HTML format:
            
            <h1>{title here}</h1>
            <h3>{section title here}</h3>
            <details>
              <summary>{summary of the section with <strong>important keywords, people, numbers, and facts bolded</strong> and key quotes repeated}</summary>
              <ul>
                <li><strong>{first point}</strong>: {short explanation with <strong>important keywords, people, numbers, and facts bolded</strong>}</li>
                <li><strong>{second point}</strong>: {same as above}</li>
                <li><strong>{third point}</strong>: {same as above}</li>
                <!-- a fourth point if warranted -->
              </ul>
            </details>
            <h3>{second section here}</h3>
            <p>{summary of the section with <strong>important keywords, people, numbers, and facts bolded</strong> and key quotes repeated}</p>
            <details>
              <summary>{summary of the section with <strong>important keywords, people, numbers, and facts bolded</strong> and key quotes repeated}</summary>
              <ul>
                <!-- as many points as warranted in the same format as above -->
              </ul>
            </details>
            <h3>{third section here}</h3>
            <!-- and so on, as many sections and details/summary subpoints as warranted -->
            
            With all the words in brackets replaced by the summary of the content. Sanitize non-visual HTML tags with HTML entities, so <template> becomes &lt;template&gt; but <strong> stays the same. Only draw from the source content, do not hallucinate. Finally, end with other questions that the user might want answered based on this source content:

        <hr>
        <h2>Next prompts</h2>
        <ul>
          <li>{question 1}</li>
          <li>{question 2}</li>
          <li>{question 3}</li>
        </ul>`;
  • and in the last row, on either side,
    • a nicely styled submit button with an id of sendButton (tactile styling that "depresses" on click)
  • only when sendButton is clicked, calls the Anthropic model endpoint https://api.anthropic.com/v1/complete with:
    • append the page title
    • append the page content
    • add the prompt which is a concatenation of
      finalPrompt = `Human: ${userPrompt} \n\n ${stylePrompt} \n\n Assistant:`
    • and use the claude-instant-v1 model (if pageContent is <70k words) or the claude-instant-v1-100k model (if more)
    • requesting max tokens = the higher of (25% of the length of the page content, or 750 words)
    • if another submit event is hit while the previous api call is still inflight, cancel that and start the new one
  • renders the Anthropic-generated result at the top of the popup in a div with an id of content

Important Details:

  • It has to run in a browser environment, so no Nodejs APIs allowed.

  • the return signature of the anthropic api is:

    curl https://api.anthropic.com/v1/complete \
      -H "x-api-key: $API_KEY" \
      -H 'content-type: application/json' \
      -d '{
        "prompt": "\n\nHuman: Tell me a haiku about trees\n\nAssistant: ",
        "model": "claude-v1", "max_tokens_to_sample": 1000, "stop_sequences": ["\n\nHuman:"]
      }'

    which returns:

    {"completion":" Here is a haiku about trees:\n\nSilent sentinels, \nStanding solemn in the woods,\nBranches reaching sky.","stop":"\n\nHuman:","stop_reason":"stop_sequence","truncated":false,"log_id":"f5d95cf326a4ac39ee36a35f434a59d5","model":"claude-v1","exception":null}

  • in the string prompt sent to Anthropic, first include the page title and page content, and finally append the prompt, clearly vertically separated by spacing.

  • if the Anthropic api call is a 401, handle that by clearing the stored anthropic api key and asking for it again.

  • add styles to make sure the popup's styling follows the basic rules of web design, for example having margins around the body, and a system font stack.

  • style the popup body, but insist on body margins of 16 and a minimum width of 400 and a minimum height of 600.

debugging notes

inside of background.js, just take the getPageContent response directly

chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.action === 'storePageContent') {
    // dont access request.pageContent
    chrome.storage.local.set({ pageContent: request }, () => {
      sendResponse({ success: true });
    });
  } else if (request.action === 'getPageContent') {
    chrome.storage.local.get(['pageContent'], (result) => {
      // dont access request.pageContent
      sendResponse(result);
    });
  }
  return true;
});

inside of popup.js, Update the function calls to requestAnthropicSummary
in popup.js to pass the apiKey:

chrome.storage.local.get(['apiKey'], (result) => {
  const apiKey = result.apiKey;
  requestAnthropicSummary(defaultPrompt, apiKey);
});

sendButton.addEventListener('click', () => {
  chrome.storage.local.get(['apiKey'], (result) => {
    const apiKey = result.apiKey;
    requestAnthropicSummary(userPrompt.value, apiKey);
  });
});

in popup.js, store the defaultPrompt at the top level.
Also, give an HTML format to the Anthropic prompt.

89 tokens in prompt: You are an AI developer who is trying to write a p...
1785 tokens in prompt: a Chrome Manifest V3 extension that reads the curr...
[
  "manifest.json",
  "background.js",
  "content_script.js",
  "popup.html",
  "popup.js",
  "styles.css"
]
145 tokens in prompt: You are an AI developer who is trying to write a p...
1785 tokens in prompt: a Chrome Manifest V3 extension that reads the curr...
Shared dependencies:

  1. Exported variables:
    • defaultPrompt
    • defaultStyle
    • finalPrompt
  2. Data schemas:
    • storePageContent action data
    • getPageContent action data
  3. ID names of DOM elements:
    • loadingIndicator
    • userPrompt
    • stylePrompt
    • sendButton
    • content
  4. Message names:
    • storePageContent
    • getPageContent
  5. Function names:
    • requestAnthropicSummary

shared_dependencies.md
2032 tokens in prompt: You are an AI developer who is trying to write a p...
1952 tokens in prompt:
We have broken up the program into per-file g...
2032 tokens in prompt: You are an AI developer who is trying to write a p...
1952 tokens in prompt:
We have broken up the program into per-file g...
2032 tokens in prompt: You are an AI developer who is trying to write a p...
1954 tokens in prompt:
We have broken up the program into per-file g...
2032 tokens in prompt: You are an AI developer who is trying to write a p...
1952 tokens in prompt:
We have broken up the program into per-file g...
2032 tokens in prompt: You are an AI developer who is trying to write a p...
1952 tokens in prompt:
We have broken up the program into per-file g...

content_script.js

// content_script.js

function getPageTitleAndContent() {
  const pageTitle = document.title;
  const pageContent = document.body.innerText;
  return { pageTitle, pageContent };
}

chrome.runtime.sendMessage(
  { action: 'storePageContent', ...getPageTitleAndContent() },
  (response) => {
    if (response.success) {
      console.log('Page content stored successfully');
    } else {
      console.error('Failed to store page content');
    }
  }
);
2032 tokens in prompt: You are an AI developer who is trying to write a p...
1952 tokens in prompt:
We have broken up the program into per-file g...

background.js

chrome.runtime.onInstalled.addListener(() => {
  chrome.action.onClicked.addListener((tab) => {
    chrome.scripting.executeScript({
      target: { tabId: tab.id },
      files: ['content_script.js'],
    });
  });
});

chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.action === 'storePageContent') {
    chrome.storage.local.set({ pageContent: request }, () => {
      sendResponse({ success: true });
      chrome.action.setPopup({ popup: 'popup.html' }, () => {
        chrome.action.openPopup();
      });
    });
  } else if (request.action === 'getPageContent') {
    chrome.storage.local.get(['pageContent'], (result) => {
      sendResponse(result);
    });
  }
  return true;
});
manifest.json

{
  "manifest_version": 3,
  "name": "Anthropic Claude Summarizer",
  "version": "1.0.0",
  "description": "A Chrome extension that summarizes web pages using the Anthropic Claude API.",
  "permissions": ["activeTab", "storage"],
  "action": {
    "default_popup": "popup.html",
    "default_icon": {
      "16": "icon16.png",
      "48": "icon48.png",
      "128": "icon128.png"
    }
  },
  "background": {
    "service_worker": "background.js"
  },
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content_script.js"],
      "run_at": "document_idle"
    }
  ],
  "icons": {
    "16": "icon16.png",
    "48": "icon48.png",
    "128": "icon128.png"
  }
}
popup.html

<title>Anthropic Claude Summary</title>
User Prompt: <textarea id="userPrompt" rows="2"></textarea>
Style Prompt: <textarea id="stylePrompt" rows="4"></textarea>
Submit
<script src="popup.js"></script>

styles.css

body {
  margin: 16px;
  min-width: 400px;
  min-height: 600px;
}

#loadingIndicator {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
  width: 100%;
  height: 10px;
  overflow: hidden;
  background-color: #f0f0f0;
  border-radius: 5px;
}

#loadingIndicator:before {
  content: "";
  position: absolute;
  top: 0;
  left: 0;
  width: 50%;
  height: 100%;
  background: linear-gradient(45deg, #f06, transparent);
  animation: loading 1s infinite;
}

@keyframes loading {
  0% {
    transform: translateX(-100%);
  }
  100% {
    transform: translateX(100%);
  }
}

textarea {
  resize: vertical;
}

#sendButton {
  background-color: #4CAF50;
  border: none;
  color: white;
  padding: 8px 16px;
  text-align: center;
  text-decoration: none;
  display: inline-block;
  font-size: 16px;
  margin: 4px 2px;
  cursor: pointer;
  border-radius: 4px;
  transition: background-color 0.3s, transform 0.3s;
}

#sendButton:hover {
  background-color: #45a049;
}

#sendButton:active {
  transform: translateY(2px);
}
popup.js

document.addEventListener('DOMContentLoaded', () => {
  const loadingIndicator = document.getElementById('loadingIndicator');
  const userPrompt = document.getElementById('userPrompt');
  const stylePrompt = document.getElementById('stylePrompt');
  const sendButton = document.getElementById('sendButton');
  const content = document.getElementById('content');

  const defaultPrompt = 'Please provide a detailed, easy to read HTML summary of the given content';
  const defaultStyle = `Respond with 3-4 highlights per section with important keywords, people, numbers, and facts bolded in this HTML format:

<h1>{title here}</h1>
<h3>{section title here}</h3>
<details>
  <summary>{summary of the section with <strong>important keywords, people, numbers, and facts bolded</strong> and key quotes repeated}</summary>
  <ul>
    <li><strong>{first point}</strong>: {short explanation with <strong>important keywords, people, numbers, and facts bolded</strong>}</li>
    <li><strong>{second point}</strong>: {same as above}</li>
    <li><strong>{third point}</strong>: {same as above}</li>
  </ul>
</details>
<h3>{second section here}</h3>
<p>{summary of the section with <strong>important keywords, people, numbers, and facts bolded</strong> and key quotes repeated}</p>
<details>
  <summary>{summary of the section with <strong>important keywords, people, numbers, and facts bolded</strong> and key quotes repeated}</summary>
  <ul>
    <!-- as many points as warranted in the same format as above -->
  </ul>
</details>
<h3>{third section here}</h3>

With all the words in brackets replaced by the summary of the content. Sanitize non-visual HTML tags with HTML entities, so <template> becomes &lt;template&gt; but <strong> stays the same. Only draw from the source content, do not hallucinate. Finally, end with other questions that the user might want answered based on this source content:

<hr>
<h2>Next prompts</h2>
<ul>
  <li>{question 1}</li>
  <li>{question 2}</li>
  <li>{question 3}</li>
</ul>`;

  userPrompt.value = defaultPrompt;
  stylePrompt.value = defaultStyle;

  function requestAnthropicSummary(prompt, apiKey) {
    loadingIndicator.style.display = 'block';

    chrome.runtime.sendMessage({ action: 'getPageContent' }, (response) => {
      const { pageTitle, pageContent } = response.pageContent;
      const finalPrompt = `Human: ${prompt} \n\n ${stylePrompt.value} \n\n Assistant:`;

      const model = pageContent.length < 70000 ? 'claude-instant-v1' : 'claude-instant-v1-100k';
      const maxTokens = Math.max(Math.ceil(pageContent.length * 0.25), 750);

      fetch('https://api.anthropic.com/v1/complete', {
        method: 'POST',
        headers: {
          'x-api-key': apiKey,
          'content-type': 'application/json',
        },
        body: JSON.stringify({
          prompt: `${pageTitle}\n\n${pageContent}\n\n${finalPrompt}`,
          model,
          max_tokens_to_sample: maxTokens,
          stop_sequences: ['\n\nHuman:'],
        }),
      })
        .then((res) => {
          if (res.status === 401) {
            chrome.storage.local.remove('apiKey');
            throw new Error('Invalid API key');
          }
          return res.json();
        })
        .then((data) => {
          content.innerHTML = data.completion;
          loadingIndicator.style.display = 'none';
        })
        .catch((error) => {
          console.error(error);
          loadingIndicator.style.display = 'none';
        });
    });
  }

  chrome.storage.local.get(['apiKey'], (result) => {
    if (!result.apiKey) {
      const apiKey = prompt('Please enter your Anthropic Claude API key:');
      chrome.storage.local.set({ apiKey }, () => {
        requestAnthropicSummary(defaultPrompt, apiKey);
      });
    } else {
      requestAnthropicSummary(defaultPrompt, result.apiKey);
    }
  });

  sendButton.addEventListener('click', () => {
    chrome.storage.local.get(['apiKey'], (result) => {
      requestAnthropicSummary(userPrompt.value, result.apiKey);
    });
  });
});
โœ“ App completed.`

Beautiful right? No errors.
But when I go into the generated directory, I see these files -
background.js content_script.js manifest.json popup.html popup.js shared_dependencies.md styles.css

And in manifest.json, I see this -
{
  "manifest_version": 3,
  "name": "Anthropic Claude Summarizer",
  "version": "1.0.0",
  "description": "A Chrome extension that summarizes web pages using the Anthropic Claude API.",
  "permissions": ["activeTab", "storage"],
  "action": {
    "default_popup": "popup.html",
    "default_icon": { "16": "icon16.png", "48": "icon48.png", "128": "icon128.png" }
  },
  "background": { "service_worker": "background.js" },
  "content_scripts": [
    { "matches": ["<all_urls>"], "js": ["content_script.js"], "run_at": "document_idle" }
  ],
  "icons": { "16": "icon16.png", "48": "icon48.png", "128": "icon128.png" }
}

Those icon files, icon16.png, icon48.png, and icon128.png don't exist. So when I try to load the plugin extension in chrome, I get the error -
Could not load icon 'icon16.png' specified in 'icons'.
Could not load manifest.
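
A minimal workaround sketch until the generator can emit binary assets: either delete the `icons` and `default_icon` blocks from the generated manifest.json, or drop placeholder PNGs next to it. Assuming Pillow is installed (`pip install pillow`):

```python
# Write flat-colored placeholder icons at the three sizes the generated
# manifest.json references; the color is an arbitrary placeholder.
from PIL import Image

for size in (16, 48, 128):
    Image.new("RGBA", (size, size), (240, 0, 102, 255)).save(f"icon{size}.png")
```

Run this inside the generated directory and Chrome should accept the manifest.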

Use with existing project

Hey - how can I use this with my existing project? Is it possible to make smol go through my project to understand it, and then make modifications based on requirements?

how to run with gpt3.5

I have access to GPT-4, but I'd rather not use it, since smol-developer, especially in its early development stage, could use a lot of tokens and make many requests to ChatGPT.

I am trying to run this command:

modal run main.py --prompt prompt.md --model=gpt-3.5

I am also running into the issue where even if I do run it normally with:

modal run main.py --prompt prompt.md

It gives me this error:

InvalidRequestError: The model: `gpt-4` does not exist

Yes, I have access to GPT4 aswell as paid rate usage and yes I have access to the modal library.
Does anyone know of a place within the smol-developer repository where i can change the model to gpt3.5?
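
Two things worth checking: the OpenAI API model id is `gpt-3.5-turbo` (`gpt-3.5` is not a valid id, so the first command should be modal run main.py --prompt prompt.md --model=gpt-3.5-turbo), and the "`gpt-4` does not exist" error usually means the API key has no GPT-4 API access, which is granted separately from ChatGPT Plus. A hedged sketch for checking what a key can use, written against the old 0.x openai SDK that these tracebacks show; `pick_model` is a hypothetical helper, not part of smol-developer:

```python
# List the models visible to this API key and fall back to gpt-3.5-turbo
# when gpt-4 is not among them (old openai 0.x SDK).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def pick_model(preferred: str = "gpt-4", fallback: str = "gpt-3.5-turbo") -> str:
    available = {m.id for m in openai.Model.list().data}
    return preferred if preferred in available else fallback

print(pick_model())  # prints gpt-3.5-turbo if the key lacks gpt-4 API access
```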

Traceback errors and KeyError

Hi guys! I'm still very new to coding, so please be patient with me. I followed all of the instructions to run Smol-ai and put my OpenAI API key into the field: OPENAI_API_KEY=

After prompting Smol-ai It tells me:

hi its me, ๐Ÿฃthe smol developer๐Ÿฃ! you said you wanted: "My_Prompt" in text

But then I get the following errors

Traceback (most recent call last):
  File "C:\A.I\Smol Ai Developer\developer\main_no_modal.py", line 230, in <module>
    main(prompt, directory, file)
  File "C:\A.I\Smol Ai Developer\developer\main_no_modal.py", line 121, in main
    filepaths_string = generate_response(
  File "C:\A.I\Smol Ai Developer\developer\main_no_modal.py", line 26, in generate_response
    openai.api_key = os.environ["OPENAI_API_KEY"]
  File "C:\Users\Zove\AppData\Local\Programs\Python\Python310\lib\os.py", line 680, in __getitem__
    raise KeyError(key) from None
KeyError: 'OPENAI_API_KEY'

I signed up for Modal but am still on the waitlist, so I'm using: python main_no_modal.py " " --model=gpt-4

I also have access to GPT-4 through the paid subscription.

Any help would be appreciated and thanks!! :D
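
The KeyError means the key never reached the process environment: main_no_modal.py reads `os.environ["OPENAI_API_KEY"]` directly, so a value written into a `.env` file is only seen if something loads that file first (on Windows you can also run `set OPENAI_API_KEY=sk-...` in the same terminal before the script). A minimal sketch, assuming python-dotenv is installed (`pip install python-dotenv`):

```python
# Load key=value pairs from ./.env into os.environ before the script reads
# OPENAI_API_KEY; python-dotenv is an assumption, not a repo dependency.
import os
from dotenv import load_dotenv

load_dotenv()
print("OPENAI_API_KEY" in os.environ)  # should print True once the key is set
```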

Plugins/extensions

I saw that OpenAI just released ChatGPT plugins to everyone.
This includes: wolfram alpha, browsing, ambition, zapier....

I've been using smol-ai for a few days and I think it's awesome.
Are there any plans to integrate plugins in the project (is it even possible to access these features through the API?)

Do you need help with that?

Cheers //

Keep hitting the timeout

856 tokens in prompt: You are an AI developer who is trying to write a p
383 tokens in prompt: 
    We have broken up the program into per-file g
Task's current input in-asvotoKZmysBBhM8gYwuO7 hit its timeout of 300s
Task's current input in-2faqX2wOstABbDFnaz7bL4 hit its timeout of 300s
Task's current input in-V1wHgq9MirPebunNHpsAbD hit its timeout of 300s
Task's current input in-tgHALPxITM0zESibMZcZMi hit its timeout of 300s
Task's current input in-dFkDAszKCVLVtxaDYPyLYK hit its timeout of 300s
Task's current input in-wb1AOmhAJi6AE42fP69QSZ hit its timeout of 300s
Task's current input in-cM4rYTeziKeLTEvij4Le7Z hit its timeout of 300s
Task's current input in-TyTG7gTy0hahqCcGUqfCMN hit its timeout of 300s
Task's current input in-ppHAaueBWIDGHNcMI76IyA hit its timeout of 300s
Runner terminated, in-progress inputs will be re-scheduled
Runner terminated, in-progress inputs will be re-scheduled
....
โ”‚ /usr/local/lib/python3.10/site-packages/modal/functions.py:437 in          โ”‚
โ”‚ fetch_output                                                               โ”‚
โ”‚                                                                            โ”‚
โ”‚    436 โ”‚   โ”‚   try:                                                        โ”‚
โ”‚ โฑ  437 โ”‚   โ”‚   โ”‚   output = await _process_result(item.result, client.stub โ”‚
โ”‚    438 โ”‚   โ”‚   except Exception as e:                                      โ”‚
โ”‚                                                                            โ”‚
โ”‚ /usr/local/lib/python3.10/site-packages/modal/functions.py:109 in          โ”‚
โ”‚ _process_result                                                            โ”‚
โ”‚                                                                            โ”‚
โ”‚    108 โ”‚   if result.status == api_pb2.GenericResult.GENERIC_STATUS_TIMEOU โ”‚
โ”‚ โฑ  109 โ”‚   โ”‚   raise _TimeoutError(result.exception)                       โ”‚
โ”‚    110 โ”‚   elif result.status != api_pb2.GenericResult.GENERIC_STATUS_SUCC โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
TimeoutError: Task's current input in-2faqX2wOstABbDFnaz7bL4 hit its timeout 
of 300s

I've changed things in main.py:

@stub.function(
    image=openai_image,
    secret=modal.Secret.from_dotenv(),
    retries=modal.Retries(
        max_retries=3,
        backoff_coefficient=2.0,
        initial_delay=1.0,
    ),
    concurrency_limit=2,
    timeout=600,
)
def generate_response(system_prompt, user_prompt, *args):

I use modal.

InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 7575 tokens. Please reduce the length of the messages.

While following the workflow shown in the YouTube video, I ran into this error.

I had run a prompt and gotten an error. I tried running the debugger on the traceback, and it errors out.

Command is here:
modal run debugger.py --prompt "codespace โžœ /workspaces/developer (main) $ /home/codespace/.python/current/bin/python3 /workspaces/developer/generated/main.py Enter the path to the CSV file: data.csv Traceback (most recent call last): File "/workspaces/developer/generated/main.py", line 16, in <module> main() File "/workspaces/developer/generated/main.py", line 10, in main column_names, data_rows = csv_parser.parse_csv(csv_file_path) TypeError: parse_csv() missing 1 required positional argument: 'column_indices'"

Error is here:


โœ“ Initialized. View app at https://modal.com/apps/ap-P7vk4JqfCQJSOKa1GkPKfC
โœ“ Created objects.
โ”œโ”€โ”€ ๐Ÿ”จ Created generate_response.
โ””โ”€โ”€ ๐Ÿ”จ Created mount /workspaces/developer/debugger.py
Traceback (most recent call last):
  File "/pkg/modal/_container_entrypoint.py", line 330, in handle_input_exception
    yield
  File "/pkg/modal/_container_entrypoint.py", line 403, in call_function_sync
    res = fun(*args, **kwargs)
  File "/root/debugger.py", line 80, in generate_response
    response = openai.ChatCompletion.create(**params)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 7575 tokens. Please reduce the length of the messages.
Traceback (most recent call last):
  File "/pkg/modal/_container_entrypoint.py", line 330, in handle_input_exception
    yield
  File "/pkg/modal/_container_entrypoint.py", line 403, in call_function_sync
    res = fun(*args, **kwargs)
  File "/root/debugger.py", line 80, in generate_response
    response = openai.ChatCompletion.create(**params)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 7575 tokens. Please reduce the length of the messages.
Traceback (most recent call last):
  File "/pkg/modal/_container_entrypoint.py", line 330, in handle_input_exception
    yield
  File "/pkg/modal/_container_entrypoint.py", line 403, in call_function_sync
    res = fun(*args, **kwargs)
  File "/root/debugger.py", line 80, in generate_response
    response = openai.ChatCompletion.create(**params)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 7575 tokens. Please reduce the length of the messages.
Traceback (most recent call last):
  File "/pkg/modal/_container_entrypoint.py", line 330, in handle_input_exception
    yield
  File "/pkg/modal/_container_entrypoint.py", line 403, in call_function_sync
    res = fun(*args, **kwargs)
  File "/root/debugger.py", line 80, in generate_response
    response = openai.ChatCompletion.create(**params)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 7575 tokens. Please reduce the length of the messages.
โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
โ”‚ /home/codespace/.python/current/bin/modal:8 in <module>                                          โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   7 โ”‚   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])                         โ”‚
โ”‚ โฑ 8 โ”‚   sys.exit(main())                                                                         โ”‚
โ”‚   9                                                                                              โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/modal/__main__.py:6 in main                โ”‚
โ”‚                                                                                                  โ”‚
โ”‚    5 def main():                                                                                 โ”‚
โ”‚ โฑ  6 โ”‚   entrypoint_cli()                                                                        โ”‚
โ”‚    7                                                                                             โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/click/core.py:1130 in __call__             โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   1129 โ”‚   โ”‚   """Alias for :meth:`main`."""                                                     โ”‚
โ”‚ โฑ 1130 โ”‚   โ”‚   return self.main(*args, **kwargs)                                                 โ”‚
โ”‚   1131                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/typer/core.py:778 in main                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   777 โ”‚   ) -> Any:                                                                              โ”‚
โ”‚ โฑ 778 โ”‚   โ”‚   return _main(                                                                      โ”‚
โ”‚   779 โ”‚   โ”‚   โ”‚   self,                                                                          โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/typer/core.py:216 in _main                 โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   215 โ”‚   โ”‚   โ”‚   with self.make_context(prog_name, args, **extra) as ctx:                       โ”‚
โ”‚ โฑ 216 โ”‚   โ”‚   โ”‚   โ”‚   rv = self.invoke(ctx)                                                      โ”‚
โ”‚   217 โ”‚   โ”‚   โ”‚   โ”‚   if not standalone_mode:                                                    โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/click/core.py:1657 in invoke               โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   1656 โ”‚   โ”‚   โ”‚   โ”‚   with sub_ctx:                                                             โ”‚
โ”‚ โฑ 1657 โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   return _process_result(sub_ctx.command.invoke(sub_ctx))               โ”‚
โ”‚   1658                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/click/core.py:1657 in invoke               โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   1656 โ”‚   โ”‚   โ”‚   โ”‚   with sub_ctx:                                                             โ”‚
โ”‚ โฑ 1657 โ”‚   โ”‚   โ”‚   โ”‚   โ”‚   return _process_result(sub_ctx.command.invoke(sub_ctx))               โ”‚
โ”‚   1658                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/click/core.py:1404 in invoke               โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   1403 โ”‚   โ”‚   if self.callback is not None:                                                     โ”‚
โ”‚ โฑ 1404 โ”‚   โ”‚   โ”‚   return ctx.invoke(self.callback, **ctx.params)                                โ”‚
โ”‚   1405                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/click/core.py:760 in invoke                โ”‚
โ”‚                                                                                                  โ”‚
โ”‚    759 โ”‚   โ”‚   โ”‚   with ctx:                                                                     โ”‚
โ”‚ โฑ  760 โ”‚   โ”‚   โ”‚   โ”‚   return __callback(*args, **kwargs)                                        โ”‚
โ”‚    761                                                                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/click/decorators.py:26 in new_func         โ”‚
โ”‚                                                                                                  โ”‚
โ”‚    25 โ”‚   def new_func(*args, **kwargs):  # type: ignore                                         โ”‚
โ”‚ โฑ  26 โ”‚   โ”‚   return f(get_current_context(), *args, **kwargs)                                   โ”‚
โ”‚    27                                                                                            โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/modal/cli/run.py:115 in f                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   114 โ”‚   โ”‚   โ”‚   else:                                                                          โ”‚
โ”‚ โฑ 115 โ”‚   โ”‚   โ”‚   โ”‚   func(*args, **kwargs)                                                      โ”‚
โ”‚   116 โ”‚   โ”‚   โ”‚   if app.function_invocations == 0:                                              โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /workspaces/developer/debugger.py:40 in main                                                     โ”‚
โ”‚                                                                                                  โ”‚
โ”‚   39   prompt += "\n\nGive me ideas for what could be wrong and what fixes to do in which fil    โ”‚
โ”‚ โฑ 40   res = generate_response.call(system, prompt, model)                                       โ”‚
โ”‚   41   # print res in teal                                                                       โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/python/3.10.4/lib/python3.10/site-packages/synchronicity/synchronizer.py:443 in       โ”‚
โ”‚ proxy_method                                                                                     โ”‚
โ”‚                                                                                                  โ”‚
โ”‚                 ...Remote call to Modal Function (ta-WXXcuOtyiK1GQo8XrNPZ8k)...                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /root/debugger.py:80 in generate_response                                                        โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ โฑ 80 response = openai.ChatCompletion.create(**params)                                           โ”‚
โ”‚                                                                                                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py:25 in create     โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ โฑ 25 return super().create(*args, **kwargs)                                                      โ”‚
โ”‚                                                                                                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:153 โ”‚
โ”‚ in create                                                                                        โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ โฑ 153 response, _, api_key = requestor.request(                                                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/lib/python3.10/site-packages/openai/api_requestor.py:230 in request                   โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ โฑ 230 resp, got_stream = self._interpret_response(result, stream)                                โ”‚
โ”‚                                                                                                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/lib/python3.10/site-packages/openai/api_requestor.py:624 in _interpret_response       โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ โฑ 624 self._interpret_response_line(                                                             โ”‚
โ”‚                                                                                                  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ /usr/local/lib/python3.10/site-packages/openai/api_requestor.py:687 in _interpret_response_line  โ”‚
โ”‚                                                                                                  โ”‚
โ”‚ โฑ 687 raise self.handle_error_response(                                                          โ”‚
โ”‚                                                                                                  โ”‚
โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 7575 tokens. Please reduce the length of the messages

Have GPT-4 API access, running in a DevContainer in Visual Studio Code on Windows 10.

I am not sure how to see exactly what it is outputting to hit this 7575 tokens -- are there any debug steps I can take?

My debug prompt is quite short, it is a short traceback.
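
One way to see where the 7575 tokens come from is to count each piece of the prompt the same way the "N tokens in prompt" log lines do. Note that debugger.py builds its prompt from the generated files as well as the traceback you pass in, so the bulk is likely file contents rather than your short debug prompt. A hedged sketch with tiktoken (the model name and sample strings are placeholders):

```python
# Count tokens per prompt piece with tiktoken to find what blows past
# the 4097-token limit; the strings below are placeholders.
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

system_prompt = "You are an AI developer..."        # placeholder
generated_file = open("generated/main.py").read()   # one file from the prompt
print(count_tokens(system_prompt), count_tokens(generated_file))
```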
