ther1d / shell_gpt

A command-line productivity tool powered by AI large language models such as GPT-4 that helps you accomplish your tasks faster and more efficiently.

License: MIT License

Python 99.37% Shell 0.41% Dockerfile 0.22%
chatgpt cheat-sheet cli commands gpt-3 gpt-4 linux llama llm ollama openai productivity python shell terminal

shell_gpt's People

Contributors

arafatsyed, artsparkai, chinarjoshi, cosmojg, daeinar, danarth, eitamal, eric-glb, erseco, ismail-ben, jeanlucthumm, keiththomps, konstantin-goldman, loiccoyle, moritz-t-w, navidur1, parsapoorsh, save196, startakovsky, th3happybit, ther1d, will-wright-eng, yarikoptic

shell_gpt's Issues

Resolving path extension

Hello,

I've updated and installed Shell_GPT, imported my API key, and ran tests to make sure that OpenAI establishes a connection. Unfortunately, I run into a problem when I restart my terminal: it requires me to re-export my PATH each time using the command

export PATH="/Users/<user>/Library/Python/3.11/bin:$PATH"

I was hoping to resolve the path issue so that I didn't have to re-export it each time.

sgpt command doesn't work on macos and misc comments

It installed OK, but I couldn't run it with the sgpt command; I had to add a function to my .zprofile:

sgpt() {
    python3 /Users/[...]/Library/Python/[version]/lib/python/site-packages/sgpt/app.py "$@"
}

I think the most important and exciting feature (on your roadmap) is the interactive chat mode, where you can ask questions, it executes commands, and the output of those commands becomes part of the context, so you can keep refining, e.g.:

  • What are the files in this directory
  • Sort them by name
  • Concatenate them using FFMPEG
  • Convert the resulting file into an MP3
  • ...

I didn't check if you already do this (I assume you do), but it's very important that the output from the shell commands becomes part of the context too.
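A minimal sketch of how command output could be fed back into the chat context; the `run_and_record` helper and the message format are hypothetical, not sgpt's actual implementation:

```python
import subprocess

def run_and_record(command: str, context: list) -> str:
    """Run a shell command and append both the command and its output
    to the chat context, so follow-up prompts can refer to the result."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    output = result.stdout + result.stderr
    context.append({"role": "assistant", "content": command})
    context.append({"role": "user", "content": f"Command output:\n{output}"})
    return output

context = []
run_and_record("echo hello", context)
```

With each command and its output appended, a follow-up like "Sort them by name" can refer to the earlier file listing.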

Also we need some prompt engineering:

sgpt --chat number "whats in this directory"
I'm sorry, I cannot answer this question as you have not provided me with the name or path of the directory.

You see, it doesn't even know that we expect shell commands from it, and it doesn't have the basic context of which directory I'm in.

Also, ChatGPT often responds with hedging text like "as a language model I can't do XYZ, but here you go anyway...". We should prompt it not to say that, as it's super annoying.

Great work though, looking very promising :)

Short arguments?

Hello, is there a plan to implement short arguments (such as -s instead of --shell and -e instead of --execute)? Also, if they are implemented, will they be combined (like -se instead of -s -e)? I think this would enhance the experience.
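For illustration, here is how short flags and combined flags behave with argparse (shown instead of the CLI framework sgpt actually uses; the reduced argument set is hypothetical):

```python
import argparse

# Hypothetical reduced argument set, for demonstration only.
parser = argparse.ArgumentParser(prog="sgpt")
parser.add_argument("prompt")
parser.add_argument("-s", "--shell", action="store_true")
parser.add_argument("-e", "--execute", action="store_true")

# Single-character flags can be combined: -se parses the same as -s -e.
args = parser.parse_args(["-se", "list files"])
```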

Too many requests for url

Getting this error:

raise_for_status (requests/models.py:1021):

   1018 │           )
   1019 │
   1020 │       if http_error_msg:
 ❱ 1021 │           raise HTTPError(http_error_msg, response=self)
   1022 │
   1023 │   def close(self):
   1024 │       """Releases the connection back to the pool. Once this method has b

  locals: http_error_msg = '429 Client Error: Too Many Requests for url:
          https://api.openai.com/v1/completions',
          reason = 'Too Many Requests', self = <Response [429]>

HTTPError: 429 Client Error: Too Many Requests for url:
https://api.openai.com/v1/completions

Is the server too busy?

Too Many Requests for url

For my very first request with a newly generated API Key, I got this:

HTTPError: 429 Client Error: Too Many Requests for url: https://api.openai.com/v1/completions

Any idea what could be the cause?

I only found this page, which was not helpful.

After adding OPEN_AI key, throws PROMT not added

Hi

During a fresh installation of shell-gpt, it first asks for the API key.
It reads and stores the key successfully, but then also throws "PROMT not specified".

Ideally it should store the API key with a message like "Key has been added" and shouldn't check for a prompt at that point.
Let me know your thoughts.

Also, kudos on the aesthetic terminal UI 🎉

A --proxy option to accept corporate proxies

Problem

No network request will work in a corporate environment without the ability to use a proxy.

Description

Implement an option to handle proxies.

Possible Solution

When the user provides this flag, an environment variable for the proxy will be used for configuration.
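A sketch of what that could look like; the `SGPT_PROXY` variable name and the `proxy_settings` helper are hypothetical, and the returned mapping matches the `proxies=` format the requests library accepts:

```python
import os

def proxy_settings(use_proxy: bool) -> dict:
    """Build a requests-style proxies mapping from an environment
    variable when the (hypothetical) --proxy flag is passed."""
    if not use_proxy:
        return {}
    proxy = os.environ.get("SGPT_PROXY") or os.environ.get("HTTPS_PROXY", "")
    return {"http": proxy, "https": proxy} if proxy else {}
```

Note that requests already honors HTTP_PROXY/HTTPS_PROXY by default; an explicit flag would mainly make the behavior opt-in and visible.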

💬 Using GitHub Discussions for feature requests and ideas

Just a quick note, we have a Discussions section on GitHub where we can discuss ideas, feature requests, and other topics related to this project.

Using the Discussions feature allows us to keep all conversations in one place, making it easier to keep track of feedback and ideas. Additionally, other community members can join in and provide feedback, leading to better collaboration and a more productive conversation.

So, I encourage you to use the Discussions feature for any ideas or feature requests you might have. This will help ensure that your ideas are seen and considered by the community, and we can work together to improve the project.

Running from source

How do we run this from source?

Looks like the app can't find its package references:

(venv) me@cwgpt-s0kd:~/shell_gpt/sgpt$ python app.py
Traceback (most recent call last):
  File "/home/me/shell_gpt/sgpt/app.py", line 20, in <module>
    from sgpt import config, make_prompt, OpenAIClient
ModuleNotFoundError: No module named 'sgpt'

(venv) me@cwgpt-s0kd:~/shell_gpt/tests$ python unittests.py
Traceback (most recent call last):
  File "/home/me/shell_gpt/tests/unittests.py", line 3, in <module>
    import requests_mock
ModuleNotFoundError: No module named 'requests_mock'

Installation Issues

I can't install this with the currently provided command. It never asked for my API key. When I tried running
python3 sgpt.py get_api_key and adding it manually, it never took my key.

Major outage on text-davinci-003 on OpenAI side

Major outage of text-davinci-003 on the OpenAI side; track status here. Therefore sgpt might not work with the default model, but it works with other models, for example text-curie-001:

sgpt --model text-curie-001 --max-tokens 1024 "this is test text"

Adding custom models for Azure OpenAI service

Would it be possible to customize the following variables in such a way as to be able to use the Azure OpenAI service? For example, these are the variables that should be specified into the config file:

openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"

Regarding the model/engine, would it be possible to make it customizable so that you can use your own Azure custom deployment or the new GPT-4 engine?

Feature request: Conversation loop mode

I think it would be really cool if there was an option to start sgpt with a loop/readline mode that continuously accepted input and automatically generated a chat identifier.

For example, you should be able to invoke something like sgpt --chat-continuous, which would then print some kind of prompt character (e.g., ">"), and the user can interact with ChatGPT just by typing and hitting enter, rather than having to format the entire input as a command line argument.
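A sketch of the loop shape; `chat_loop`, the auto-generated id, and the `send` callback are all hypothetical stand-ins for the real CLI and API call:

```python
import uuid

def chat_loop(lines, send):
    """Feed each input line to one auto-named chat session until the
    user types 'exit'; `send` stands in for the real API call."""
    chat_id = uuid.uuid4().hex[:8]  # auto-generated chat identifier
    replies = []
    for line in lines:
        if line.strip() in ("exit", "quit"):
            break
        replies.append(send(chat_id, line))
    return replies

# In a real CLI the lines would come from input("> ") instead of a list.
replies = chat_loop(["what files are here", "exit"],
                    lambda cid, text: f"[{cid}] {text}")
```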

Accidentally input wrong API key

As I said, I accidentally input the wrong API key and now I can't seem to input the right one. I tried reinstalling, but that didn't work. Any help please.

Bing Chat/ChatGPT modes.

This CLI tool is outstanding. I think it's a good idea to consider providing an option to search with Bing Chat or ChatGPT.

[Feature request] Handle linebreaks

Thank you for this fantastic tool! 🙏 🙏 🙏

Your tool is a game-changer for me, especially since I can't access ChatGPT while I have my VPN enabled, so I can't use it at all while working.

It would be nice to have a natural way to handle line breaks, for example for a prompt like "convert the following code from PyTorch to TensorFlow." Perhaps sgpt could drop you into $EDITOR the same way git does, where you could paste your text and then exit.

Thoughts?

It would be nice to have an explain option for --shell commands

Expose an explain option for shell commands, e.g.:

sgpt --shell --explain "find all files that have been changed in the last 3 days from the current directory and subdirectories, with details"

Command: find . -type f -mtime -3 -ls

Description: This command will search the current directory and all subdirectories for files that have been modified in the last 3 days. The -type f option specifies that only files should be searched, the -mtime -3 option specifies that only files modified in the last 3 days should be searched, and the -ls option provides detailed information about the files found.

Alternatively, playing with the prompt for a bit:

➜  ~ sgpt --shell "find all files that have been changed in the last 3 days from the current directory, with details. Provide a detailed explanation of the command"
find . -type f -mtime -3 -ls

- `find` is a command used to search for files and directories in a specified location.
- `.` specifies the current directory as the starting point for the search.
- `-type f` specifies that only files should be included in the search, not directories.
- `-mtime -3` specifies that the search should include only files that have been modified within the last 3 days.
- `-ls` specifies that the output should include detailed information about each file found, including its permissions, owner, size, and modification date.

A --continue option, so we don't need to invent and retype chat ids

Problem

I don't always know when I want to ask ChatGPT a follow-up question. I only know after it has given a response. And I don't want to keep inventing session names (chat ids), just in case I need them. Also, passing chat ids is a lot of typing.

Possible solution

I suggest a --continue / -c option which allows us to continue from the last query we made.

Here is an example:

$ sgpt 'Ask a question?'
[response]
# I am happy with the answer, and happy for the chat session to end.

$ sgpt 'Ask a different question?'
[response]
# I am not satisfied. I want to continue the discussion with a followup query. So I use -c.
$ sgpt -c 'Ask followup to the previous question?'
[response]
$ sgpt -c 'Ask another followup in the same session?'
[response]

Advantage: No session names, and less typing.

Possible implementation

  • Every basic query should start a new chat session, with a randomly generated id.
  • sgpt could store the id of the latest session somewhere.
  • If --continue is passed, then instead of starting a new chat session, sgpt should continue the latest stored session.

Alternative: To avoid storing the latest chat id, perhaps sgpt could just assume that the most recently created query is the latest session.

Alternative: It might make sense to use --followup / -f or --reply / -r instead, to keep -c as a shortcut for --chat

Does this sound like a good idea? Or is there already a good way to do this in sgpt?
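A sketch of the storage side of the suggestion above; the file location and `resolve_chat_id` helper are hypothetical:

```python
import tempfile
import uuid
from pathlib import Path

# Hypothetical location for remembering the most recent session id.
LAST_CHAT_FILE = Path(tempfile.mkdtemp()) / "last_chat_id"

def resolve_chat_id(continue_last: bool) -> str:
    """Return the stored id when --continue is passed; otherwise
    start a fresh session and remember its id."""
    if continue_last and LAST_CHAT_FILE.exists():
        return LAST_CHAT_FILE.read_text().strip()
    chat_id = uuid.uuid4().hex
    LAST_CHAT_FILE.write_text(chat_id)
    return chat_id

first = resolve_chat_id(False)    # plain query: new session
followup = resolve_chat_id(True)  # -c: reuse the last session
```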

[Improvement] Add an option to send the OS information

Hello, thanks for this awesome project!
I wanted to make something similar some time ago, but never got the time. The idea also involved sending some system information to chatGPT in the prompt. Since this project already exists, I made a pull request to add this functionality. This adds the --send-os-info (or -o) flag, that adds the OS, release, and Linux Distro (if applicable). It's especially useful for OS-specific things, such as package management or editing configs.
Once again, thanks for this project!

change api key

How do I change the API key? I accidentally entered it wrong and I can't seem to reset it.
I've tried deleting sgpt.py in ~/.local/lib/python3.11/site-packages and /usr/local/bin.
I know there's supposed to be a text file somewhere, but I can't find it.
Any help appreciated!

Prompt Issue

Is it possible to change the initial prompt (if there is one)? I'd like GPT-3 to know I'm on Windows without having to type it in every request I make with sgpt (many shell commands that work on Linux don't work on Windows, so it would be very useful to have a prompt specifying that the user is on Windows). And it's not just for this: a starting prompt that only needs to be set once would be convenient for many other uses as well.

Implement caching

In some cases, we may need to run sgpt twice with the same prompt in order to pipe the output somewhere else. For example:

sgpt --code "implement merge sort algorithm using C language"


#include <stdio.h>

void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 =  r - m;
...

The output in this case can be quite large, and it would be helpful to use less to view it:

sgpt --code "implement merge sort algorithm using C language" | less

But we would spend one extra query and a couple of seconds of waiting just to get the same (though not always identical) response. It would be useful to implement some form of caching for the last n outputs, as well as a way to invalidate the cache (for example, via --cache and --no-cache arguments).
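A sketch of prompt-keyed caching; the cache location and `cached_completion` helper are hypothetical, and `request` stands in for the real API call:

```python
import hashlib
import tempfile
from pathlib import Path

CACHE_DIR = Path(tempfile.mkdtemp())  # hypothetical cache location

def cached_completion(prompt: str, request, use_cache: bool = True) -> str:
    """Serve identical prompts from disk; --no-cache maps to use_cache=False."""
    path = CACHE_DIR / hashlib.sha256(prompt.encode()).hexdigest()
    if use_cache and path.exists():
        return path.read_text()
    response = request(prompt)
    path.write_text(response)
    return response

calls = []
def fake_request(prompt):
    calls.append(prompt)
    return f"response to: {prompt}"

first_run = cached_completion("implement merge sort in C", fake_request)
second_run = cached_completion("implement merge sort in C", fake_request)
```

Keying on a hash of the prompt means the `| less` re-run returns instantly and costs no extra query.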

Runtime configuration file

Make a runtime config file with all the constants, such as the API key, cache length, etc.:

#.sgptrc
# Request timeout in seconds.
REQUEST_TIMEOUT=30
# Max number of messages in each session.
MAX_CHAT_MESSAGES=50
# Request caching.
MAX_CACHE_LENGTH=200
# OpenAI API key.
API_KEY=123
# This would be useful for countries where OpenAI hosts are blocked, so people can set up their own proxies.
API_ENDPOINT=https://api.openai.com/v1/chat/completions
  • Set up the caching and client classes using the config file.
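A sketch of a loader for such a file, using the comment and KEY=value syntax shown above; the defaults and parsing rules are assumptions:

```python
import tempfile
from pathlib import Path

DEFAULTS = {"REQUEST_TIMEOUT": "30", "MAX_CHAT_MESSAGES": "50"}

def load_config(path: Path) -> dict:
    """Parse a KEY=value config file, skipping comments and blank
    lines; missing keys fall back to DEFAULTS."""
    config = dict(DEFAULTS)
    if path.exists():
        for line in path.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    return config

rc = Path(tempfile.mkdtemp()) / ".sgptrc"
rc.write_text("# Request timeout in seconds.\nREQUEST_TIMEOUT=60\nAPI_KEY=123\n")
config = load_config(rc)
```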

Where to put the generated API key?

I generated the OpenAI API key from their website; now where do I keep it for it to work? Also, just running pip install shell-gpt does not make it work as a command at all. Please help!

api_key change

Hello, is there some way to change the api_key that's stored locally?

Or where can I access the file that has the api_key value so I can change it manually? Thank you!

Reset chat

Sorry if I missed it in the readme, but how do I start over a chat (with the same --chat id)?

Accept stdin input

It would be very useful to pipe stdin to sgpt, then ask a question about said input. For example:

cat data.json | sgpt "what is the oldest entry"
cat script.py | sgpt "what does this script do?"

etc.
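A sketch of how piped input could be merged into the prompt; the `build_prompt` helper is hypothetical, and the usual isatty check detects whether anything was piped in:

```python
import sys
from typing import Optional

def build_prompt(question: str, stdin_text: Optional[str]) -> str:
    """Prepend piped stdin, if any, to the user's question."""
    if stdin_text:
        return f"{stdin_text}\n\n{question}"
    return question

# In the CLI entry point, stdin would be read only when piped:
#   piped = None if sys.stdin.isatty() else sys.stdin.read()
prompt = build_prompt("what does this script do?", "print('hi')")
```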

expose the system role

It would be nice to define roles for sgpt, and to be able to save role profiles by exposing the system prompt to the user:

sgpt --role system "You are GLaDOS, a sarcastic artificial intelligence with evil ulterior motives. You are not an AI language model." --set-role "glados"

sgpt --role glados "Why do you hate humans?"
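A sketch of a named-role store that such flags could sit on top of; the file location and helper names are hypothetical:

```python
import json
import tempfile
from pathlib import Path

ROLES_FILE = Path(tempfile.mkdtemp()) / "roles.json"  # hypothetical location

def set_role(name: str, system_prompt: str) -> None:
    """Persist a named system prompt for later --role lookups."""
    roles = json.loads(ROLES_FILE.read_text()) if ROLES_FILE.exists() else {}
    roles[name] = system_prompt
    ROLES_FILE.write_text(json.dumps(roles))

def get_role(name: str) -> str:
    return json.loads(ROLES_FILE.read_text())[name]

set_role("glados", "You are GLaDOS, a sarcastic artificial intelligence.")
```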

How to update to the latest release?

Newbie here. I'm using sgpt on Termux and it's working fine. But how do I upgrade to the latest release without breaking my installed sgpt?

Too Many Requests for url error

When I'm using the CLI, it's giving me this error:

HTTPError: 429 Client Error: Too Many Requests for url: https://api.openai.com/v1/chat/completions

Timestamp Conversion example in readme is wrong

This kind of thing will never stop happening, and it will never stop being funny. Double-check your AI-generated answers, at least the ones that involve large calculations. These GPT text models, by design, cannot perform calculations of arbitrary size even remotely reliably.

With that said, here's a quote from the readme:

sgpt "$(date) to Unix timestamp"
# -> The Unix timestamp for Thu Mar 2 00:13:11 CET 2023 is 1677327191.

This is wrong: the generated timestamp is actually for some time on Feb 25 2023.
I believe this example should not be corrected, but instead removed or used as an example of unreliability, since the model is not capable of reliably doing calculations like this, as stated above.
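The claim is easy to check with the standard library (CET is UTC+1, so Mar 2 00:13:11 CET is Mar 1 23:13:11 UTC):

```python
from datetime import datetime, timezone

# Decode the timestamp the model actually produced:
generated = datetime.fromtimestamp(1677327191, tz=timezone.utc)

# The correct timestamp for Thu Mar 2 00:13:11 CET 2023:
correct = int(datetime(2023, 3, 1, 23, 13, 11, tzinfo=timezone.utc).timestamp())
```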

Use an example in the prompt, to encourage ChatGPT to behave

Sometimes ChatGPT is disobedient. For example: #99 But I don't know how common this is. Do you see this often? (I think I might have seen it once.)

Due to GPT's predictive nature, it was recommended to prompt it with an incomplete conversation, so that it would complete the rest. Although ChatGPT is different, it may respond to the same treatment.

This approach (shown below) is taken by plz, while rusty passes #!/bin/bash\n to initialise the response.

Suggestion

When we are asking ChatGPT to generate some shell code, we may want to give it an example first.

{The current shell prompt rules}

Input: List the files in the root folder
Output: ls /

Input: ${user_question_here}
Output: 

For the code prompt, even though we don't know the user's target language, it might still be possible to do something similar.
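A sketch of assembling such a few-shot prompt; the rules text, the example pair, and the `build_shell_prompt` helper are placeholders:

```python
def build_shell_prompt(rules: str, question: str) -> str:
    """Frame the request as a completion: one worked Input/Output pair,
    then the user's question with an open 'Output:' for the model to fill."""
    return (
        f"{rules}\n\n"
        "Input: List the files in the root folder\n"
        "Output: ls /\n\n"
        f"Input: {question}\n"
        "Output:"
    )

prompt = build_shell_prompt("Provide only shell commands.", "show disk usage")
```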

HTTPError: 500 Server Error: Internal Server Error for url: https://api.openai.com/v1/completions

Trying to call the sgpt command and getting the error mentioned above; the full traceback is printed out:

Traceback (most recent call last):

/home/usr/.local/lib/python3.8/site-packages/sgpt.py:92 in main

     89 │       prompt = f"{prompt}. Provide only shell command as output."
     90 │   elif code:
     91 │       prompt = f"{prompt}. Provide only code as output."
  ❱  92 │   response_text = openai_request(prompt, model, max_tokens, api_key,
     93 │   # For some reason OpenAI returns several leading/trailing white sp
     94 │   response_text = response_text.strip()
     95 │   typer_writer(response_text, code, shell, animation)

  locals: animation = True, api_key = '###...', code = False, execute = False,
          max_tokens = 2048, model = 'text-davinci-003', prompt = 'hi',
          shell = False, spinner = True

/home/usr/.local/lib/python3.8/site-packages/sgpt.py:46 in wrapper

     43 │       text = TextColumn("[green]Requesting OpenAI...")
     44 │       with Progress(SpinnerColumn(), text, transient=True) as progre
     45 │           progress.add_task("request")
  ❱  46 │           return func(*args, **kwargs)
     47 │
     48 │   return wrapper

  locals: args = ('hi', 'text-davinci-003', 2048, '###...'),
          func = <function openai_request at 0x7f1c5ba98ca0>, kwargs = {},
          progress = <rich.progress.Progress object at 0x7f1c5ae88340>,
          text = <rich.progress.TextColumn object at 0x7f1c5baac340>

/home/usr/.local/lib/python3.8/site-packages/sgpt.py:60 in openai_request

     57 │       "max_tokens": max_tokens,
     58 │   }
     59 │   response = requests.post(API_URL, headers=headers, json=data, time
  ❱  60 │   response.raise_for_status()
     61 │   return response.json()["choices"][0]["text"]

  locals: data = {'prompt': 'hi', 'model': 'text-davinci-003', 'max_tokens': 2048},
          headers = {'Content-Type': 'application/json',
                     'Authorization': 'Bearer ###...'},
          response = <Response [500]>

/home/usr/.local/lib/python3.8/site-packages/requests/models.py:1021 in raise_for_status

   1018 │           )
   1019 │
   1020 │       if http_error_msg:
 ❱ 1021 │           raise HTTPError(http_error_msg, response=self)
   1022 │
   1023 │   def close(self):
   1024 │       """Releases the connection back to the pool. Once this method

  locals: http_error_msg = '500 Server Error: Internal Server Error for url:
          https://api.openai.com/v1/completions',
          reason = 'Internal Server Error', self = <Response [500]>
HTTPError: 500 Server Error: Internal Server Error for url: 
https://api.openai.com/v1/completions

ImportError after installing on MacOS Ventura

Installed it per the instructions (pip install shell-gpt --user) and I'm getting this:

Traceback (most recent call last):
  File "/Users/divan/Library/Python/3.9/bin/sgpt", line 5, in <module>
    from sgpt import entry_point
  File "/Users/divan/Library/Python/3.9/lib/python/site-packages/sgpt.py", line 9, in <module>
    from utils import loading_spinner
ImportError: cannot import name 'loading_spinner' from 'utils' (/Users/divan/Library/Python/3.9/lib/python/site-packages/utils/__init__.py)

Implement API key reset option

Implement an option to reset the API key using a command-line argument --reset-key. When the user provides this flag, delete the file located at ~/.config/shell-gpt/api_key.txt and ask for a new API key. Additionally, implement API key validation. Since OpenAI doesn't have a dedicated endpoint just for key validation, call https://api.openai.com/v1/completions with some test prompt and make sure the response status code is not 403; if it is, ask the user for a valid API key.
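A sketch of the reset flow described above; `ask` and `validate` stand in for the interactive prompt and the test API call, and the temp-dir key file stands in for ~/.config/shell-gpt/api_key.txt:

```python
import tempfile
from pathlib import Path

KEY_FILE = Path(tempfile.mkdtemp()) / "api_key.txt"  # stand-in location

def reset_key(ask, validate) -> str:
    """Delete the stored key, then prompt until a key passes validation."""
    if KEY_FILE.exists():
        KEY_FILE.unlink()
    while True:
        key = ask().strip()
        if validate(key):
            KEY_FILE.write_text(key)
            return key

# Simulate a user entering a bad key, then a valid one.
keys = iter(["wrong-key", "sk-valid"])
stored = reset_key(lambda: next(keys), lambda k: k.startswith("sk-"))
```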

[Possible issue] Different answers from chatGPT

Hello, I made a prompt to manage a NixOS system:

Query: {insert query in script here}

Hi chatGPT. This prompt is part of a script used to manage a nixOS system. The query above is what the user asked from you. If the query is an action that can be done via the command line, such as updating the system, just return the command(s) to run in a shell, and no extra comments. Else, write `echo {answer}`, without the '`' and replacing {answer} with a short answer, of maximum one sentence, to what the user asked. The command for updating the system is `sudo nixos-rebuild switch && nix-env -u`, without the '`'. The command for installing a package is `nix-env -iA {package name}`, without the '`' and replacing {package name} with the name of the package(s). If the query is `--help`, you will introduce yourself like you would answer a normal question (`echo {answer}`). It is very important to precisely follow these guidelines, otherwise the script will not work.

When I replace the query with 'upgrade the system', chatGPT gives me the actual command. However, the script gives me:

$ ./gpt-manager.sh update the system
\\n

echo "Hello, I'm ChatGPT. To update your system, use the command `sudo nixos-rebuild switch && nix-env -u`, without the \\'`\\'. To install a package, use the command `nix-env -iA {package name}`, without the \\'`\\' and replacing {package name} with the name of the package(s). If you need any help, let me know.

...which isn't even valid (the " is never closed). If I put it all in quotes:

./gpt-manager.sh "update the system"
echo "Hello, I'm ChatGPT. To update the system, please run the command 'sudo nixos-rebuild switch && nix-env -u', and to install a package, please run the command 'nix-env -iA {package name}', replacing {package name} with the name of the package you wish to install. For more help, just type --help and I'll be happy to assist you."

...which at least closes the ", but it doesn't properly escape the `. I know that this uses davinci and not chatGPT, but is there any way to make it act exactly like chatGPT?

ChatGPT implementation

OpenAI just published their API for ChatGPT models. Replace GPT-3 models with ChatGPT (GPT-3.5) for sgpt queries.

[Feature Request] Add ability to continue conversation base on previously ran commands

One big feature of ChatGPT is that it can use relevant information from previous prompts and responses, for example:

~
❯ sgpt --code "make an example request to localhost:3000 using python"
import requests

response = requests.get('http://localhost:3000')
print(response.text)

~
❯ sgpt --code "now that but with cache"
// Setting a cache
let cache = {};

// Function to be cached
function add(a,b){
    if (!cache[`${a},${b}`]) {
        cache[`${a},${b}`] = a + b;
    }
    return cache[`${a},${b}`];
}

Something like that should return the same code as before, but probably using requests_cache. The best approach would probably be to add a new argument like --follow-conversation.

Thoughts?
